* [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:47 ` Aviad Yehezkel
` (2 more replies)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security Akhil Goyal
` (12 subsequent siblings)
13 siblings, 3 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
The rte_security library provides APIs to create and free
security sessions for protocol offload, or for crypto
operations offloaded to an ethernet device.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
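Illustrative usage sketch for reviewers (not part of the patch; the crypto
transform, mempool and the way the security context is obtained are
placeholders, e.g. via the rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx()
APIs referenced in rte_security.h):

#include <rte_security.h>

static struct rte_security_session *
create_esp_session(struct rte_security_ctx *sec_ctx,
                   struct rte_crypto_sym_xform *crypto_xform,
                   struct rte_mempool *sess_mp)
{
        struct rte_security_session_conf conf = {
                .action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
                .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
                .ipsec = {
                        .spi = 1000, /* example SPI */
                        .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
                        .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                        .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
                },
                .crypto_xform = crypto_xform,
        };

        /* Allocates a session object from sess_mp and configures it on
         * the device behind sec_ctx.
         */
        return rte_security_session_create(sec_ctx, &conf, sess_mp);
}

For the lookaside action the session is then attached to each crypto
operation with rte_security_attach_session() before enqueue; for the inline
actions no per-packet attach is needed and mbuf ol_flags/metadata are used
instead. The session is released with rte_security_session_destroy().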
lib/librte_security/Makefile | 53 +++
lib/librte_security/rte_security.c | 149 ++++++++
lib/librte_security/rte_security.h | 535 +++++++++++++++++++++++++++
lib/librte_security/rte_security_driver.h | 155 ++++++++
lib/librte_security/rte_security_version.map | 13 +
5 files changed, 905 insertions(+)
create mode 100644 lib/librte_security/Makefile
create mode 100644 lib/librte_security/rte_security.c
create mode 100644 lib/librte_security/rte_security.h
create mode 100644 lib/librte_security/rte_security_driver.h
create mode 100644 lib/librte_security/rte_security_version.map
diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile
new file mode 100644
index 0000000..af87bb2
--- /dev/null
+++ b/lib/librte_security/Makefile
@@ -0,0 +1,53 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_security.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_security.c
+
+# export include files
+SYMLINK-y-include += rte_security.h
+SYMLINK-y-include += rte_security_driver.h
+
+# versioning export map
+EXPORT_MAP := rte_security_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
new file mode 100644
index 0000000..1227fca
--- /dev/null
+++ b/lib/librte_security/rte_security.c
@@ -0,0 +1,149 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of NXP nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_malloc.h>
+#include <rte_dev.h>
+
+#include "rte_security.h"
+#include "rte_security_driver.h"
+
+struct rte_security_session *
+rte_security_session_create(struct rte_security_ctx *instance,
+ struct rte_security_session_conf *conf,
+ struct rte_mempool *mp)
+{
+ struct rte_security_session *sess = NULL;
+
+ if (conf == NULL)
+ return NULL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_create, NULL);
+
+ if (rte_mempool_get(mp, (void *)&sess))
+ return NULL;
+
+ if (instance->ops->session_create(instance->device, conf, sess, mp)) {
+ rte_mempool_put(mp, (void *)sess);
+ return NULL;
+ }
+ instance->sess_cnt++;
+
+ return sess;
+}
+
+int
+rte_security_session_update(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_update, -ENOTSUP);
+ return instance->ops->session_update(instance->device, sess, conf);
+}
+
+int
+rte_security_session_stats_get(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_stats_get, -ENOTSUP);
+ return instance->ops->session_stats_get(instance->device, sess, stats);
+}
+
+int
+rte_security_session_destroy(struct rte_security_ctx *instance,
+ struct rte_security_session *sess)
+{
+ int ret;
+ struct rte_mempool *mp = rte_mempool_from_obj(sess);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_destroy, -ENOTSUP);
+
+ if (instance->sess_cnt)
+ instance->sess_cnt--;
+
+ ret = instance->ops->session_destroy(instance->device, sess);
+ if (!ret)
+ rte_mempool_put(mp, (void *)sess);
+
+ return ret;
+}
+
+int
+rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_mbuf *m, void *params)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->set_pkt_metadata, -ENOTSUP);
+ return instance->ops->set_pkt_metadata(instance->device,
+ sess, m, params);
+}
+
+const struct rte_security_capability *
+rte_security_capabilities_get(struct rte_security_ctx *instance)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
+ return instance->ops->capabilities_get(instance->device);
+}
+
+const struct rte_security_capability *
+rte_security_capability_get(struct rte_security_ctx *instance,
+ struct rte_security_capability_idx *idx)
+{
+ const struct rte_security_capability *capabilities;
+ const struct rte_security_capability *capability;
+ uint16_t i = 0;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
+ capabilities = instance->ops->capabilities_get(instance->device);
+
+ if (capabilities == NULL)
+ return NULL;
+
+ while ((capability = &capabilities[i++])->action
+ != RTE_SECURITY_ACTION_TYPE_NONE) {
+ if (capability->action == idx->action &&
+ capability->protocol == idx->protocol) {
+ if (idx->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ if (capability->ipsec.proto ==
+ idx->ipsec.proto &&
+ capability->ipsec.mode ==
+ idx->ipsec.mode &&
+ capability->ipsec.direction ==
+ idx->ipsec.direction)
+ return capability;
+ }
+ }
+ }
+
+ return NULL;
+}
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
new file mode 100644
index 0000000..416bbfd
--- /dev/null
+++ b/lib/librte_security/rte_security.h
@@ -0,0 +1,535 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of NXP nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SECURITY_H_
+#define _RTE_SECURITY_H_
+
+/**
+ * @file rte_security.h
+ *
+ * RTE Security Common Definitions
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <sys/types.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+#include <netinet/ip6.h>
+
+#include <rte_common.h>
+#include <rte_crypto.h>
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** IPSec protocol mode */
+enum rte_security_ipsec_sa_mode {
+ RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ /**< IPSec Transport mode */
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ /**< IPSec Tunnel mode */
+};
+
+/** IPSec Protocol */
+enum rte_security_ipsec_sa_protocol {
+ RTE_SECURITY_IPSEC_SA_PROTO_AH,
+ /**< AH protocol */
+ RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ /**< ESP protocol */
+};
+
+/** IPSEC tunnel type */
+enum rte_security_ipsec_tunnel_type {
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ /**< Outer header is IPv4 */
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ /**< Outer header is IPv6 */
+};
+
+/**
+ * Security context for crypto/eth devices
+ *
+ * Security instance for each driver to register security operations.
+ * The application can get the security context from the crypto/eth device id
+ * using the APIs rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx().
+ * This structure is used to identify the device (crypto/eth) for which the
+ * security operations need to be performed.
+ */
+struct rte_security_ctx {
+ enum {
+ RTE_SECURITY_INSTANCE_INVALID,
+ /**< Security context is invalid */
+ RTE_SECURITY_INSTANCE_VALID
+ /**< Security context is valid */
+ } state;
+ /**< Current state of security context */
+ void *device;
+ /**< Crypto/ethernet device attached */
+ struct rte_security_ops *ops;
+ /**< Pointer to security ops for the device */
+ uint16_t sess_cnt;
+ /**< Number of sessions attached to this context */
+};
+
+/**
+ * IPSEC tunnel parameters
+ *
+ * These parameters are used to build outbound tunnel headers.
+ */
+struct rte_security_ipsec_tunnel_param {
+ enum rte_security_ipsec_tunnel_type type;
+ /**< Tunnel type: IPv4 or IPv6 */
+ RTE_STD_C11
+ union {
+ struct {
+ struct in_addr src_ip;
+ /**< IPv4 source address */
+ struct in_addr dst_ip;
+ /**< IPv4 destination address */
+ uint8_t dscp;
+ /**< IPv4 Differentiated Services Code Point */
+ uint8_t df;
+ /**< IPv4 Don't Fragment bit */
+ uint8_t ttl;
+ /**< IPv4 Time To Live */
+ } ipv4;
+ /**< IPv4 header parameters */
+ struct {
+ struct in6_addr src_addr;
+ /**< IPv6 source address */
+ struct in6_addr dst_addr;
+ /**< IPv6 destination address */
+ uint8_t dscp;
+ /**< IPv6 Differentiated Services Code Point */
+ uint32_t flabel;
+ /**< IPv6 flow label */
+ uint8_t hlimit;
+ /**< IPv6 hop limit */
+ } ipv6;
+ /**< IPv6 header parameters */
+ };
+};
+
+/**
+ * IPsec Security Association option flags
+ */
+struct rte_security_ipsec_sa_options {
+ /** Extended Sequence Numbers (ESN)
+ *
+ * * 1: Use extended (64 bit) sequence numbers
+ * * 0: Use normal sequence numbers
+ */
+ uint32_t esn : 1;
+
+ /** UDP encapsulation
+ *
+ * * 1: Do UDP encapsulation/decapsulation so that IPSEC packets can
+ * traverse through NAT boxes.
+ * * 0: No UDP encapsulation
+ */
+ uint32_t udp_encap : 1;
+
+ /** Copy DSCP bits
+ *
+ * * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to
+ * the outer IP header in encapsulation, and vice versa in
+ * decapsulation.
+ * * 0: Do not change DSCP field.
+ */
+ uint32_t copy_dscp : 1;
+
+ /** Copy IPv6 Flow Label
+ *
+ * * 1: Copy IPv6 flow label from inner IPv6 header to the
+ * outer IPv6 header.
+ * * 0: Outer header is not modified.
+ */
+ uint32_t copy_flabel : 1;
+
+ /** Copy IPv4 Don't Fragment bit
+ *
+ * * 1: Copy the DF bit from the inner IPv4 header to the outer
+ * IPv4 header.
+ * * 0: Outer header is not modified.
+ */
+ uint32_t copy_df : 1;
+
+ /** Decrement inner packet Time To Live (TTL) field
+ *
+ * * 1: In tunnel mode, decrement inner packet IPv4 TTL or
+ * IPv6 Hop Limit after tunnel decapsulation, or before tunnel
+ * encapsulation.
+ * * 0: Inner packet is not modified.
+ */
+ uint32_t dec_ttl : 1;
+};
+
+/** IPSec security association direction */
+enum rte_security_ipsec_sa_direction {
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ /**< Encrypt and generate digest */
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ /**< Verify digest and decrypt */
+};
+
+/**
+ * IPsec security association configuration data.
+ *
+ * This structure contains data required to create an IPsec SA security session.
+ */
+struct rte_security_ipsec_xform {
+ uint32_t spi;
+ /**< SA security parameter index */
+ uint32_t salt;
+ /**< SA salt */
+ struct rte_security_ipsec_sa_options options;
+ /**< various SA options */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPSec SA Direction - Egress/Ingress */
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA Protocol - AH/ESP */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA Mode - transport/tunnel */
+ struct rte_security_ipsec_tunnel_param tunnel;
+ /**< Tunnel parameters, ignored for transport mode */
+};
+
+/**
+ * MACsec security session configuration
+ */
+struct rte_security_macsec_xform {
+ /** To be Filled */
+};
+
+/**
+ * Security session action type.
+ */
+enum rte_security_session_action_type {
+ RTE_SECURITY_ACTION_TYPE_NONE,
+ /**< No security actions */
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ /**< Crypto processing for security protocol is processed inline
+ * during transmission
+ */
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ /**< All security protocol processing is performed inline during
+ * transmission
+ */
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ /**< All security protocol processing including crypto is performed
+ * on a lookaside accelerator
+ */
+};
+
+/** Security session protocol definition */
+enum rte_security_session_protocol {
+ RTE_SECURITY_PROTOCOL_IPSEC,
+ /**< IPsec Protocol */
+ RTE_SECURITY_PROTOCOL_MACSEC,
+ /**< MACSec Protocol */
+};
+
+/**
+ * Security session configuration
+ */
+struct rte_security_session_conf {
+ enum rte_security_session_action_type action_type;
+ /**< Type of action to be performed on the session */
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+ union {
+ struct rte_security_ipsec_xform ipsec;
+ struct rte_security_macsec_xform macsec;
+ };
+ /**< Configuration parameters for security session */
+ struct rte_crypto_sym_xform *crypto_xform;
+ /**< Security Session Crypto Transformations */
+};
+
+struct rte_security_session {
+ void *sess_private_data;
+ /**< Private session material */
+};
+
+/**
+ * Create security session as specified by the session configuration
+ *
+ * @param instance security instance
+ * @param conf session configuration parameters
+ * @param mp mempool to allocate session objects from
+ * @return
+ * - On success, pointer to session
+ * - On failure, NULL
+ */
+struct rte_security_session *
+rte_security_session_create(struct rte_security_ctx *instance,
+ struct rte_security_session_conf *conf,
+ struct rte_mempool *mp);
+
+/**
+ * Update security session as specified by the session configuration
+ *
+ * @param instance security instance
+ * @param sess session to update parameters
+ * @param conf update configuration parameters
+ * @return
+ * - On success returns 0
+ * - On failure returns a negative errno value
+ */
+int
+rte_security_session_update(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf);
+
+/**
+ * Free security session header and the session private data and
+ * return it to its original mempool.
+ *
+ * @param instance security instance
+ * @param sess security session to be freed
+ *
+ * @return
+ * - 0 if successful.
+ * - -EINVAL if session is NULL.
+ * - -EBUSY if not all device private data has been freed.
+ */
+int
+rte_security_session_destroy(struct rte_security_ctx *instance,
+ struct rte_security_session *sess);
+
+/**
+ * Updates the mbuf with device-specific metadata
+ *
+ * @param instance security instance
+ * @param sess security session
+ * @param mb packet mbuf to set metadata on.
+ * @param params device-specific parameters
+ * required for metadata
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_mbuf *mb, void *params);
+
+/**
+ * Attach a session to a symmetric crypto operation
+ *
+ * @param sym_op crypto operation
+ * @param sess security session
+ */
+static inline int
+__rte_security_attach_session(struct rte_crypto_sym_op *sym_op,
+ struct rte_security_session *sess)
+{
+ sym_op->sec_session = sess;
+
+ return 0;
+}
+
+static inline void *
+get_sec_session_private_data(const struct rte_security_session *sess)
+{
+ return sess->sess_private_data;
+}
+
+static inline void
+set_sec_session_private_data(struct rte_security_session *sess,
+ void *private_data)
+{
+ sess->sess_private_data = private_data;
+}
+
+/**
+ * Attach a session to a crypto operation.
+ * This API is needed only for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL.
+ * For the other rte_security_session_action_type values, ol_flags in rte_mbuf
+ * may be used to request security operations instead.
+ *
+ * @param op crypto operation
+ * @param sess security session
+ */
+static inline int
+rte_security_attach_session(struct rte_crypto_op *op,
+ struct rte_security_session *sess)
+{
+ if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
+ return -EINVAL;
+
+ op->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+
+ return __rte_security_attach_session(op->sym, sess);
+}
+
+struct rte_security_macsec_stats {
+ uint64_t reserved;
+};
+
+struct rte_security_ipsec_stats {
+ uint64_t reserved;
+
+};
+
+struct rte_security_stats {
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+
+ union {
+ struct rte_security_macsec_stats macsec;
+ struct rte_security_ipsec_stats ipsec;
+ };
+};
+
+/**
+ * Get security session statistics
+ *
+ * @param instance security instance
+ * @param sess security session
+ * @param stats statistics
+ * @return
+ * - On success returns 0
+ * - On failure returns a negative errno value
+ */
+int
+rte_security_session_stats_get(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats);
+
+/**
+ * Security capability definition
+ */
+struct rte_security_capability {
+ enum rte_security_session_action_type action;
+ /**< Security action type*/
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol */
+ RTE_STD_C11
+ union {
+ struct {
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA protocol */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA mode */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPsec SA direction */
+ struct rte_security_ipsec_sa_options options;
+ /**< IPsec SA supported options */
+ } ipsec;
+ /**< IPsec capability */
+ struct {
+ /* To be Filled */
+ } macsec;
+ /**< MACsec capability */
+ };
+
+ const struct rte_cryptodev_capabilities *crypto_capabilities;
+ /**< Corresponding crypto capabilities for security capability */
+
+ uint32_t ol_flags;
+ /**< Device offload flags */
+};
+
+#define RTE_SECURITY_TX_OLOAD_NEED_MDATA 0x00000001
+/**< HW needs metadata update, see rte_security_set_pkt_metadata().
+ */
+
+#define RTE_SECURITY_TX_HW_TRAILER_OFFLOAD 0x00000002
+/**< HW constructs the trailer of packets.
+ * Transmitted packets will have the trailer added to them
+ * by hardware. The next protocol field will be based on
+ * the mbuf->inner_esp_next_proto field.
+ */
+#define RTE_SECURITY_RX_HW_TRAILER_OFFLOAD 0x00010000
+/**< HW removes the trailer of packets.
+ * Received packets have no trailer; the next protocol field
+ * is supplied in the mbuf->inner_esp_next_proto field.
+ * Inner packet is not modified.
+ */
+
+/**
+ * Security capability index used to query a security instance for a specific
+ * security capability
+ */
+struct rte_security_capability_idx {
+ enum rte_security_session_action_type action;
+ enum rte_security_session_protocol protocol;
+
+ union {
+ struct {
+ enum rte_security_ipsec_sa_protocol proto;
+ enum rte_security_ipsec_sa_mode mode;
+ enum rte_security_ipsec_sa_direction direction;
+ } ipsec;
+ };
+};
+
+/**
+ * Returns array of security instance capabilities
+ *
+ * @param instance Security instance.
+ *
+ * @return
+ * - Returns array of security capabilities.
+ * - Return NULL if no capabilities available.
+ */
+const struct rte_security_capability *
+rte_security_capabilities_get(struct rte_security_ctx *instance);
+
+/**
+ * Query if a specific capability is available on security instance
+ *
+ * @param instance security instance.
+ * @param idx security capability index to match against
+ *
+ * @return
+ * - Returns pointer to security capability on match of capability
+ * index criteria.
+ * - Return NULL if the capability not matched on security instance.
+ */
+const struct rte_security_capability *
+rte_security_capability_get(struct rte_security_ctx *instance,
+ struct rte_security_capability_idx *idx);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SECURITY_H_ */
diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h
new file mode 100644
index 0000000..78814fa
--- /dev/null
+++ b/lib/librte_security/rte_security_driver.h
@@ -0,0 +1,155 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SECURITY_DRIVER_H_
+#define _RTE_SECURITY_DRIVER_H_
+
+/**
+ * @file rte_security_driver.h
+ *
+ * RTE Security Driver specific Definitions
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "rte_security.h"
+
+/**
+ * Configure a security session on a device.
+ *
+ * @param device Crypto/eth device pointer
+ * @param conf Security session configuration
+ * @param sess Pointer to Security private session structure
+ * @param mp Mempool where the private session is allocated
+ *
+ * @return
+ * - Returns 0 if private session structure has been created successfully.
+ * - Returns -EINVAL if input parameters are invalid.
+ * - Returns -ENOTSUP if crypto device does not support the crypto transform.
+ * - Returns -ENOMEM if the private session could not be allocated.
+ */
+typedef int (*security_session_create_t)(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *sess,
+ struct rte_mempool *mp);
+
+/**
+ * Free driver private session data.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Security session structure
+ */
+typedef int (*security_session_destroy_t)(void *device,
+ struct rte_security_session *sess);
+
+/**
+ * Update driver private session data.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Pointer to Security private session structure
+ * @param conf Security session configuration
+ *
+ * @return
+ * - Returns 0 if private session structure has been updated successfully.
+ * - Returns -EINVAL if input parameters are invalid.
+ * - Returns -ENOTSUP if crypto device does not support the crypto transform.
+ */
+typedef int (*security_session_update_t)(void *device,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf);
+/**
+ * Get stats from the PMD.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Pointer to Security private session structure
+ * @param stats Security stats of the driver
+ *
+ * @return
+ * - Returns 0 if the statistics have been retrieved successfully.
+ * - Returns -EINVAL if session parameters are invalid.
+ */
+typedef int (*security_session_stats_get_t)(void *device,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats);
+
+/**
+ * Update the mbuf with provided metadata.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Security session structure
+ * @param m Packet mbuf
+ * @param params Device-specific metadata parameters
+ * @return
+ * - Returns 0 if metadata updated successfully.
+ * - Returns a negative value on error.
+ */
+typedef int (*security_set_pkt_metadata_t)(void *device,
+ struct rte_security_session *sess, struct rte_mbuf *m,
+ void *params);
+
+/**
+ * Get security capabilities of the device.
+ *
+ * @param device crypto/eth device pointer
+ *
+ * @return
+ * - Returns rte_security_capability pointer on success.
+ * - Returns NULL on error.
+ */
+typedef const struct rte_security_capability *(*security_capabilities_get_t)(
+ void *device);
+
+/** Security operations function pointer table */
+struct rte_security_ops {
+ security_session_create_t session_create;
+ /**< Configure a security session. */
+ security_session_update_t session_update;
+ /**< Update a security session. */
+ security_session_stats_get_t session_stats_get;
+ /**< Get security session statistics. */
+ security_session_destroy_t session_destroy;
+ /**< Clear a security session's private data. */
+ security_set_pkt_metadata_t set_pkt_metadata;
+ /**< Update mbuf metadata. */
+ security_capabilities_get_t capabilities_get;
+ /**< Get security capabilities. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SECURITY_DRIVER_H_ */
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
new file mode 100644
index 0000000..8af7fc1
--- /dev/null
+++ b/lib/librte_security/rte_security_version.map
@@ -0,0 +1,13 @@
+DPDK_17.11 {
+ global:
+
+ rte_security_attach_session;
+ rte_security_capabilities_get;
+ rte_security_capability_get;
+ rte_security_session_create;
+ rte_security_session_destroy;
+ rte_security_session_stats_get;
+ rte_security_session_update;
+ rte_security_set_pkt_metadata;
+
+};
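For reference, a minimal driver-side sketch (illustrative only, not part of
the patch; the "dummy" callbacks are placeholders and a real PMD would also
hook the resulting rte_security_ctx into its device):

#include <rte_security_driver.h>

static int
dummy_session_create(void *device, struct rte_security_session_conf *conf,
                     struct rte_security_session *sess, struct rte_mempool *mp)
{
        RTE_SET_USED(device);
        RTE_SET_USED(conf);
        RTE_SET_USED(mp);
        /* A real PMD allocates its private data from mp and stores it here. */
        set_sec_session_private_data(sess, NULL);
        return 0;
}

static int
dummy_session_destroy(void *device, struct rte_security_session *sess)
{
        RTE_SET_USED(device);
        RTE_SET_USED(sess);
        return 0;
}

static struct rte_security_ops dummy_sec_ops = {
        .session_create = dummy_session_create,
        .session_destroy = dummy_session_destroy,
        /* .session_update, .session_stats_get, .set_pkt_metadata and
         * .capabilities_get are filled in the same way.
         */
};

The library calls these through rte_security_ops, so a PMD only needs to
populate the table and expose it via its security context.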
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library Akhil Goyal
@ 2017-10-15 12:47 ` Aviad Yehezkel
2017-10-19 9:30 ` Ananyev, Konstantin
2017-10-20 9:37 ` Thomas Monjalon
2 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:47 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> The rte_security library provides APIs to create and free
> security sessions for protocol offload, or for crypto
> operations offloaded to an ethernet device.
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> ---
> lib/librte_security/Makefile | 53 +++
> lib/librte_security/rte_security.c | 149 ++++++++
> lib/librte_security/rte_security.h | 535 +++++++++++++++++++++++++++
> lib/librte_security/rte_security_driver.h | 155 ++++++++
> lib/librte_security/rte_security_version.map | 13 +
> 5 files changed, 905 insertions(+)
> create mode 100644 lib/librte_security/Makefile
> create mode 100644 lib/librte_security/rte_security.c
> create mode 100644 lib/librte_security/rte_security.h
> create mode 100644 lib/librte_security/rte_security_driver.h
> create mode 100644 lib/librte_security/rte_security_version.map
>
> diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile
> new file mode 100644
> index 0000000..af87bb2
> --- /dev/null
> +++ b/lib/librte_security/Makefile
> @@ -0,0 +1,53 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2017 Intel Corporation. All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +# * Redistributions of source code must retain the above copyright
> +# notice, this list of conditions and the following disclaimer.
> +# * Redistributions in binary form must reproduce the above copyright
> +# notice, this list of conditions and the following disclaimer in
> +# the documentation and/or other materials provided with the
> +# distribution.
> +# * Neither the name of Intel Corporation nor the names of its
> +# contributors may be used to endorse or promote products derived
> +# from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_security.a
> +
> +# library version
> +LIBABIVER := 1
> +
> +# build flags
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +
> +# library source files
> +SRCS-y += rte_security.c
> +
> +# export include files
> +SYMLINK-y-include += rte_security.h
> +SYMLINK-y-include += rte_security_driver.h
> +
> +# versioning export map
> +EXPORT_MAP := rte_security_version.map
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
> new file mode 100644
> index 0000000..1227fca
> --- /dev/null
> +++ b/lib/librte_security/rte_security.c
> @@ -0,0 +1,149 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright 2017 NXP.
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of NXP nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_malloc.h>
> +#include <rte_dev.h>
> +
> +#include "rte_security.h"
> +#include "rte_security_driver.h"
> +
> +struct rte_security_session *
> +rte_security_session_create(struct rte_security_ctx *instance,
> + struct rte_security_session_conf *conf,
> + struct rte_mempool *mp)
> +{
> + struct rte_security_session *sess = NULL;
> +
> + if (conf == NULL)
> + return NULL;
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_create, NULL);
> +
> + if (rte_mempool_get(mp, (void *)&sess))
> + return NULL;
> +
> + if (instance->ops->session_create(instance->device, conf, sess, mp)) {
> + rte_mempool_put(mp, (void *)sess);
> + return NULL;
> + }
> + instance->sess_cnt++;
> +
> + return sess;
> +}
> +
> +int
> +rte_security_session_update(struct rte_security_ctx *instance,
> + struct rte_security_session *sess,
> + struct rte_security_session_conf *conf)
> +{
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_update, -ENOTSUP);
> + return instance->ops->session_update(instance->device, sess, conf);
> +}
> +
> +int
> +rte_security_session_stats_get(struct rte_security_ctx *instance,
> + struct rte_security_session *sess,
> + struct rte_security_stats *stats)
> +{
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_stats_get, -ENOTSUP);
> + return instance->ops->session_stats_get(instance->device, sess, stats);
> +}
> +
> +int
> +rte_security_session_destroy(struct rte_security_ctx *instance,
> + struct rte_security_session *sess)
> +{
> + int ret;
> + struct rte_mempool *mp = rte_mempool_from_obj(sess);
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_destroy, -ENOTSUP);
> +
> + if (instance->sess_cnt)
> + instance->sess_cnt--;
> +
> + ret = instance->ops->session_destroy(instance->device, sess);
> + if (!ret)
> + rte_mempool_put(mp, (void *)sess);
> +
> + return ret;
> +}
> +
> +int
> +rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
> + struct rte_security_session *sess,
> + struct rte_mbuf *m, void *params)
> +{
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->set_pkt_metadata, -ENOTSUP);
> + return instance->ops->set_pkt_metadata(instance->device,
> + sess, m, params);
> +}
> +
> +const struct rte_security_capability *
> +rte_security_capabilities_get(struct rte_security_ctx *instance)
> +{
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
> + return instance->ops->capabilities_get(instance->device);
> +}
> +
> +const struct rte_security_capability *
> +rte_security_capability_get(struct rte_security_ctx *instance,
> + struct rte_security_capability_idx *idx)
> +{
> + const struct rte_security_capability *capabilities;
> + const struct rte_security_capability *capability;
> + uint16_t i = 0;
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
> + capabilities = instance->ops->capabilities_get(instance->device);
> +
> + if (capabilities == NULL)
> + return NULL;
> +
> + while ((capability = &capabilities[i++])->action
> + != RTE_SECURITY_ACTION_TYPE_NONE) {
> + if (capability->action == idx->action &&
> + capability->protocol == idx->protocol) {
> + if (idx->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
> + if (capability->ipsec.proto ==
> + idx->ipsec.proto &&
> + capability->ipsec.mode ==
> + idx->ipsec.mode &&
> + capability->ipsec.direction ==
> + idx->ipsec.direction)
> + return capability;
> + }
> + }
> + }
> +
> + return NULL;
> +}
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> new file mode 100644
> index 0000000..416bbfd
> --- /dev/null
> +++ b/lib/librte_security/rte_security.h
> @@ -0,0 +1,535 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright 2017 NXP.
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of NXP nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_SECURITY_H_
> +#define _RTE_SECURITY_H_
> +
> +/**
> + * @file rte_security.h
> + *
> + * RTE Security Common Definitions
> + *
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <sys/types.h>
> +
> +#include <netinet/in.h>
> +#include <netinet/ip.h>
> +#include <netinet/ip6.h>
> +
> +#include <rte_common.h>
> +#include <rte_crypto.h>
> +#include <rte_mbuf.h>
> +#include <rte_memory.h>
> +#include <rte_mempool.h>
> +
> +/** IPSec protocol mode */
> +enum rte_security_ipsec_sa_mode {
> + RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + /**< IPSec Transport mode */
> + RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + /**< IPSec Tunnel mode */
> +};
> +
> +/** IPSec Protocol */
> +enum rte_security_ipsec_sa_protocol {
> + RTE_SECURITY_IPSEC_SA_PROTO_AH,
> + /**< AH protocol */
> + RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + /**< ESP protocol */
> +};
> +
> +/** IPSEC tunnel type */
> +enum rte_security_ipsec_tunnel_type {
> + RTE_SECURITY_IPSEC_TUNNEL_IPV4,
> + /**< Outer header is IPv4 */
> + RTE_SECURITY_IPSEC_TUNNEL_IPV6,
> + /**< Outer header is IPv6 */
> +};
> +
> +/**
> + * Security context for crypto/eth devices
> + *
> + * Security instance for each driver to register security operations.
> + * The application can get the security context from the crypto/eth device id
> + * using the APIs rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx()
> + * This structure is used to identify the device(crypto/eth) for which the
> + * security operations need to be performed.
> + */
> +struct rte_security_ctx {
> + enum {
> + RTE_SECURITY_INSTANCE_INVALID,
> + /**< Security context is invalid */
> + RTE_SECURITY_INSTANCE_VALID
> + /**< Security context is valid */
> + } state;
> + /**< Current state of security context */
> + void *device;
> + /**< Crypto/ethernet device attached */
> + struct rte_security_ops *ops;
> + /**< Pointer to security ops for the device */
> + uint16_t sess_cnt;
> + /**< Number of sessions attached to this context */
> +};
> +
> +/**
> + * IPSEC tunnel parameters
> + *
> + * These parameters are used to build outbound tunnel headers.
> + */
> +struct rte_security_ipsec_tunnel_param {
> + enum rte_security_ipsec_tunnel_type type;
> + /**< Tunnel type: IPv4 or IPv6 */
> + RTE_STD_C11
> + union {
> + struct {
> + struct in_addr src_ip;
> + /**< IPv4 source address */
> + struct in_addr dst_ip;
> + /**< IPv4 destination address */
> + uint8_t dscp;
> + /**< IPv4 Differentiated Services Code Point */
> + uint8_t df;
> + /**< IPv4 Don't Fragment bit */
> + uint8_t ttl;
> + /**< IPv4 Time To Live */
> + } ipv4;
> + /**< IPv4 header parameters */
> + struct {
> + struct in6_addr src_addr;
> + /**< IPv6 source address */
> + struct in6_addr dst_addr;
> + /**< IPv6 destination address */
> + uint8_t dscp;
> + /**< IPv6 Differentiated Services Code Point */
> + uint32_t flabel;
> + /**< IPv6 flow label */
> + uint8_t hlimit;
> + /**< IPv6 hop limit */
> + } ipv6;
> + /**< IPv6 header parameters */
> + };
> +};
> +
> +/**
> + * IPsec Security Association option flags
> + */
> +struct rte_security_ipsec_sa_options {
> + /**< Extended Sequence Numbers (ESN)
> + *
> + * * 1: Use extended (64 bit) sequence numbers
> + * * 0: Use normal sequence numbers
> + */
> + uint32_t esn : 1;
> +
> + /**< UDP encapsulation
> + *
> + * * 1: Do UDP encapsulation/decapsulation so that IPSEC packets can
> + * traverse through NAT boxes.
> + * * 0: No UDP encapsulation
> + */
> + uint32_t udp_encap : 1;
> +
> + /**< Copy DSCP bits
> + *
> + * * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to
> + * the outer IP header in encapsulation, and vice versa in
> + * decapsulation.
> + * * 0: Do not change DSCP field.
> + */
> + uint32_t copy_dscp : 1;
> +
> + /**< Copy IPv6 Flow Label
> + *
> + * * 1: Copy IPv6 flow label from inner IPv6 header to the
> + * outer IPv6 header.
> + * * 0: Outer header is not modified.
> + */
> + uint32_t copy_flabel : 1;
> +
> + /**< Copy IPv4 Don't Fragment bit
> + *
> + * * 1: Copy the DF bit from the inner IPv4 header to the outer
> + * IPv4 header.
> + * * 0: Outer header is not modified.
> + */
> + uint32_t copy_df : 1;
> +
> + /**< Decrement inner packet Time To Live (TTL) field
> + *
> + * * 1: In tunnel mode, decrement inner packet IPv4 TTL or
> + * IPv6 Hop Limit after tunnel decapsulation, or before tunnel
> + * encapsulation.
> + * * 0: Inner packet is not modified.
> + */
> + uint32_t dec_ttl : 1;
> +};
> +
> +/** IPSec security association direction */
> +enum rte_security_ipsec_sa_direction {
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + /**< Encrypt and generate digest */
> + RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + /**< Verify digest and decrypt */
> +};
> +
> +/**
> + * IPsec security association configuration data.
> + *
> + * This structure contains data required to create an IPsec SA security session.
> + */
> +struct rte_security_ipsec_xform {
> + uint32_t spi;
> + /**< SA security parameter index */
> + uint32_t salt;
> + /**< SA salt */
> + struct rte_security_ipsec_sa_options options;
> + /**< various SA options */
> + enum rte_security_ipsec_sa_direction direction;
> + /**< IPSec SA Direction - Egress/Ingress */
> + enum rte_security_ipsec_sa_protocol proto;
> + /**< IPsec SA Protocol - AH/ESP */
> + enum rte_security_ipsec_sa_mode mode;
> + /**< IPsec SA Mode - transport/tunnel */
> + struct rte_security_ipsec_tunnel_param tunnel;
> + /**< Tunnel parameters, NULL for transport mode */
> +};
> +
> +/**
> + * MACsec security session configuration
> + */
> +struct rte_security_macsec_xform {
> + /** To be Filled */
> +};
> +
> +/**
> + * Security session action type.
> + */
> +enum rte_security_session_action_type {
> + RTE_SECURITY_ACTION_TYPE_NONE,
> + /**< No security actions */
> + RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + /**< Crypto processing for security protocol is processed inline
> + * during transmission
> + */
> + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
> + /**< All security protocol processing is performed inline during
> + * transmission
> + */
> + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> + /**< All security protocol processing including crypto is performed
> + * on a lookaside accelerator
> + */
> +};
> +
> +/** Security session protocol definition */
> +enum rte_security_session_protocol {
> + RTE_SECURITY_PROTOCOL_IPSEC,
> + /**< IPsec Protocol */
> + RTE_SECURITY_PROTOCOL_MACSEC,
> + /**< MACSec Protocol */
> +};
> +
> +/**
> + * Security session configuration
> + */
> +struct rte_security_session_conf {
> + enum rte_security_session_action_type action_type;
> + /**< Type of action to be performed on the session */
> + enum rte_security_session_protocol protocol;
> + /**< Security protocol to be configured */
> + union {
> + struct rte_security_ipsec_xform ipsec;
> + struct rte_security_macsec_xform macsec;
> + };
> + /**< Configuration parameters for security session */
> + struct rte_crypto_sym_xform *crypto_xform;
> + /**< Security Session Crypto Transformations */
> +};
> +
> +struct rte_security_session {
> + void *sess_private_data;
> + /**< Private session material */
> +};
> +
> +/**
> + * Create security session as specified by the session configuration
> + *
> + * @param instance security instance
> + * @param conf session configuration parameters
> + * @param mp mempool to allocate session objects from
> + * @return
> + * - On success, pointer to session
> + * - On failure, NULL
> + */
> +struct rte_security_session *
> +rte_security_session_create(struct rte_security_ctx *instance,
> + struct rte_security_session_conf *conf,
> + struct rte_mempool *mp);
> +
> +/**
> + * Update security session as specified by the session configuration
> + *
> + * @param instance security instance
> + * @param sess session to update parameters
> + * @param conf update configuration parameters
> + * @return
> + * - On success returns 0
> + * - On failure return errno
> + */
> +int
> +rte_security_session_update(struct rte_security_ctx *instance,
> + struct rte_security_session *sess,
> + struct rte_security_session_conf *conf);
> +
> +/**
> + * Free security session header and the session private data and
> + * return it to its original mempool.
> + *
> + * @param instance security instance
> + * @param sess security session to freed
> + *
> + * @return
> + * - 0 if successful.
> + * - -EINVAL if session is NULL.
> + * - -EBUSY if not all device private data has been freed.
> + */
> +int
> +rte_security_session_destroy(struct rte_security_ctx *instance,
> + struct rte_security_session *sess);
> +
> +/**
> + * Updates the buffer with device-specific defined metadata
> + *
> + * @param instance security instance
> + * @param sess security session
> + * @param mb packet mbuf to set metadata on.
> + * @param params device-specific defined parameters
> + * required for metadata
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +int
> +rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
> + struct rte_security_session *sess,
> + struct rte_mbuf *mb, void *params);
> +
> +/**
> + * Attach a session to a symmetric crypto operation
> + *
> + * @param sym_op crypto operation
> + * @param sess security session
> + */
> +static inline int
> +__rte_security_attach_session(struct rte_crypto_sym_op *sym_op,
> + struct rte_security_session *sess)
> +{
> + sym_op->sec_session = sess;
> +
> + return 0;
> +}
> +
> +static inline void *
> +get_sec_session_private_data(const struct rte_security_session *sess)
> +{
> + return sess->sess_private_data;
> +}
> +
> +static inline void
> +set_sec_session_private_data(struct rte_security_session *sess,
> + void *private_data)
> +{
> + sess->sess_private_data = private_data;
> +}
> +
> +/**
> + * Attach a session to a crypto operation.
> + * This API is needed only in case of RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD
> + * For other rte_security_session_action_type, ol_flags in rte_mbuf may be
> + * defined to perform security operations.
> + *
> + * @param op crypto operation
> + * @param sess security session
> + */
> +static inline int
> +rte_security_attach_session(struct rte_crypto_op *op,
> + struct rte_security_session *sess)
> +{
> + if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
> + return -EINVAL;
> +
> + op->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
> +
> + return __rte_security_attach_session(op->sym, sess);
> +}
> +
> +struct rte_security_macsec_stats {
> + uint64_t reserved;
> +};
> +
> +struct rte_security_ipsec_stats {
> + uint64_t reserved;
> +
> +};
> +
> +struct rte_security_stats {
> + enum rte_security_session_protocol protocol;
> + /**< Security protocol to be configured */
> +
> + union {
> + struct rte_security_macsec_stats macsec;
> + struct rte_security_ipsec_stats ipsec;
> + };
> +};
> +
> +/**
> + * Get security session statistics
> + *
> + * @param instance security instance
> + * @param sess security session
> + * @param stats statistics
> + * @return
> + * - On success return 0
> + * - On failure errno
> + */
> +int
> +rte_security_session_stats_get(struct rte_security_ctx *instance,
> + struct rte_security_session *sess,
> + struct rte_security_stats *stats);
> +
> +/**
> + * Security capability definition
> + */
> +struct rte_security_capability {
> + enum rte_security_session_action_type action;
> + /**< Security action type*/
> + enum rte_security_session_protocol protocol;
> + /**< Security protocol */
> + RTE_STD_C11
> + union {
> + struct {
> + enum rte_security_ipsec_sa_protocol proto;
> + /**< IPsec SA protocol */
> + enum rte_security_ipsec_sa_mode mode;
> + /**< IPsec SA mode */
> + enum rte_security_ipsec_sa_direction direction;
> + /**< IPsec SA direction */
> + struct rte_security_ipsec_sa_options options;
> + /**< IPsec SA supported options */
> + } ipsec;
> + /**< IPsec capability */
> + struct {
> + /* To be Filled */
> + } macsec;
> + /**< MACsec capability */
> + };
> +
> + const struct rte_cryptodev_capabilities *crypto_capabilities;
> + /**< Corresponding crypto capabilities for security capability */
> +
> + uint32_t ol_flags;
> + /**< Device offload flags */
> +};
> +
> +#define RTE_SECURITY_TX_OLOAD_NEED_MDATA 0x00000001
> +/**< HW needs metadata update, see rte_security_set_pkt_metadata().
> + */
> +
> +#define RTE_SECURITY_TX_HW_TRAILER_OFFLOAD 0x00000002
> +/**< HW constructs trailer of packets
> + * Transmitted packets will have the trailer added to them
> + * by hardawre. The next protocol field will be based on
> + * the mbuf->inner_esp_next_proto field.
> + */
> +#define RTE_SECURITY_RX_HW_TRAILER_OFFLOAD 0x00010000
> +/**< HW removes trailer of packets
> + * Received packets have no trailer, the next protocol field
> + * is supplied in the mbuf->inner_esp_next_proto field.
> + * Inner packet is not modified.
> + */
> +
> +/**
> + * Security capability index used to query a security instance for a specific
> + * security capability
> + */
> +struct rte_security_capability_idx {
> + enum rte_security_session_action_type action;
> + enum rte_security_session_protocol protocol;
> +
> + union {
> + struct {
> + enum rte_security_ipsec_sa_protocol proto;
> + enum rte_security_ipsec_sa_mode mode;
> + enum rte_security_ipsec_sa_direction direction;
> + } ipsec;
> + };
> +};
> +
> +/**
> + * Returns array of security instance capabilities
> + *
> + * @param instance Security instance.
> + *
> + * @return
> + * - Returns array of security capabilities.
> + * - Return NULL if no capabilities available.
> + */
> +const struct rte_security_capability *
> +rte_security_capabilities_get(struct rte_security_ctx *instance);
> +
> +/**
> + * Query if a specific capability is available on security instance
> + *
> + * @param instance security instance.
> + * @param idx security capability index to match against
> + *
> + * @return
> + * - Returns pointer to security capability on match of capability
> + * index criteria.
> + * - Return NULL if the capability not matched on security instance.
> + */
> +const struct rte_security_capability *
> +rte_security_capability_get(struct rte_security_ctx *instance,
> + struct rte_security_capability_idx *idx);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_SECURITY_H_ */
> diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h
> new file mode 100644
> index 0000000..78814fa
> --- /dev/null
> +++ b/lib/librte_security/rte_security_driver.h
> @@ -0,0 +1,155 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + * Copyright 2017 NXP.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_SECURITY_DRIVER_H_
> +#define _RTE_SECURITY_DRIVER_H_
> +
> +/**
> + * @file rte_security_driver.h
> + *
> + * RTE Security Common Definitions
> + *
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "rte_security.h"
> +
> +/**
> + * Configure a security session on a device.
> + *
> + * @param device Crypto/eth device pointer
> + * @param conf Security session configuration
> + * @param sess Pointer to Security private session structure
> + * @param mp Mempool where the private session is allocated
> + *
> + * @return
> + * - Returns 0 if the private session structure has been created successfully.
> + * - Returns -EINVAL if input parameters are invalid.
> + * - Returns -ENOTSUP if crypto device does not support the crypto transform.
> + * - Returns -ENOMEM if the private session could not be allocated.
> + */
> +typedef int (*security_session_create_t)(void *device,
> + struct rte_security_session_conf *conf,
> + struct rte_security_session *sess,
> + struct rte_mempool *mp);
> +
> +/**
> + * Free driver private session data.
> + *
> + * @param device Crypto/eth device pointer
> + * @param sess Security session structure
> + */
> +typedef int (*security_session_destroy_t)(void *device,
> + struct rte_security_session *sess);
> +
> +/**
> + * Update driver private session data.
> + *
> + * @param device Crypto/eth device pointer
> + * @param sess Pointer to Security private session structure
> + * @param conf Security session configuration
> + *
> + * @return
> + * - Returns 0 if the private session structure has been updated successfully.
> + * - Returns -EINVAL if input parameters are invalid.
> + * - Returns -ENOTSUP if crypto device does not support the crypto transform.
> + */
> +typedef int (*security_session_update_t)(void *device,
> + struct rte_security_session *sess,
> + struct rte_security_session_conf *conf);
> +/**
> + * Get stats from the PMD.
> + *
> + * @param device Crypto/eth device pointer
> + * @param sess Pointer to Security private session structure
> + * @param stats Security stats of the driver
> + *
> + * @return
> + * - Returns 0 if the session stats have been retrieved successfully.
> + * - Returns -EINVAL if session parameters are invalid.
> + */
> +typedef int (*security_session_stats_get_t)(void *device,
> + struct rte_security_session *sess,
> + struct rte_security_stats *stats);
> +
> +/**
> + * Update the mbuf with provided metadata.
> + *
> + * @param device Crypto/eth device pointer
> + * @param sess Security session structure
> + * @param m Packet buffer
> + * @param params Metadata
> + *
> + * @return
> + * - Returns 0 if metadata updated successfully.
> + * - Returns a negative value on error.
> + */
> +typedef int (*security_set_pkt_metadata_t)(void *device,
> + struct rte_security_session *sess, struct rte_mbuf *m,
> + void *params);
> +
> +/**
> + * Get security capabilities of the device.
> + *
> + * @param device crypto/eth device pointer
> + *
> + * @return
> + * - Returns rte_security_capability pointer on success.
> + * - Returns NULL on error.
> + */
> +typedef const struct rte_security_capability *(*security_capabilities_get_t)(
> + void *device);
> +
> +/** Security operations function pointer table */
> +struct rte_security_ops {
> + security_session_create_t session_create;
> + /**< Configure a security session. */
> + security_session_update_t session_update;
> + /**< Update a security session. */
> + security_session_stats_get_t session_stats_get;
> + /**< Get security session statistics. */
> + security_session_destroy_t session_destroy;
> + /**< Clear a security session's private data. */
> + security_set_pkt_metadata_t set_pkt_metadata;
> + /**< Update mbuf metadata. */
> + security_capabilities_get_t capabilities_get;
> + /**< Get security capabilities. */
> +};
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_SECURITY_DRIVER_H_ */
> diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
> new file mode 100644
> index 0000000..8af7fc1
> --- /dev/null
> +++ b/lib/librte_security/rte_security_version.map
> @@ -0,0 +1,13 @@
> +DPDK_17.11 {
> + global:
> +
> + rte_security_attach_session;
> + rte_security_capabilities_get;
> + rte_security_capability_get;
> + rte_security_session_create;
> + rte_security_session_destroy;
> + rte_security_session_stats_get;
> + rte_security_session_update;
> + rte_security_set_pkt_metadata;
> +
> +};
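
For reference, a PMD enables these operations by exposing a statically
defined ops table through its security context. A minimal sketch of how
a driver might populate it (the pmd_sec_* callback names are
hypothetical; only the struct and the typedefs above come from this
patch):

    static const struct rte_security_ops pmd_sec_ops = {
            .session_create = pmd_sec_session_create,
            .session_update = pmd_sec_session_update,
            .session_stats_get = pmd_sec_session_stats_get,
            .session_destroy = pmd_sec_session_destroy,
            .set_pkt_metadata = pmd_sec_set_pkt_metadata,
            .capabilities_get = pmd_sec_capabilities_get,
    };
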
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library Akhil Goyal
2017-10-15 12:47 ` Aviad Yehezkel
@ 2017-10-19 9:30 ` Ananyev, Konstantin
2017-10-21 15:54 ` Akhil Goyal
2017-10-20 9:37 ` Thomas Monjalon
2 siblings, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 9:30 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
> +
> +/**
> + * Security context for crypto/eth devices
> + *
> + * Security instance for each driver to register security operations.
> + * The application can get the security context from the crypto/eth device id
> + * using the APIs rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx()
> + * This structure is used to identify the device(crypto/eth) for which the
> + * security operations need to be performed.
> + */
> +struct rte_security_ctx {
> + enum {
> + RTE_SECURITY_INSTANCE_INVALID,
> + /**< Security context is invalid */
> + RTE_SECURITY_INSTANCE_VALID
> + /**< Security context is valid */
> + } state;
As a nit - why do you need state now?
As I understand if device doesn't have its security context setup properly,
then rte_eth_dev_get_sec_ctx() would just return 0.
Konstantin
> + /**< Current state of security context */
> + void *device;
> + /**< Crypto/ethernet device attached */
> + struct rte_security_ops *ops;
> + /**< Pointer to security ops for the device */
> + uint16_t sess_cnt;
> + /**< Number of sessions attached to this context */
> +};
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-19 9:30 ` Ananyev, Konstantin
@ 2017-10-21 15:54 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 15:54 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
On 10/19/2017 3:00 PM, Ananyev, Konstantin wrote:
>
>
>> +
>> +/**
>> + * Security context for crypto/eth devices
>> + *
>> + * Security instance for each driver to register security operations.
>> + * The application can get the security context from the crypto/eth device id
>> + * using the APIs rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx()
>> + * This structure is used to identify the device(crypto/eth) for which the
>> + * security operations need to be performed.
>> + */
>> +struct rte_security_ctx {
>> + enum {
>> + RTE_SECURITY_INSTANCE_INVALID,
>> + /**< Security context is invalid */
>> + RTE_SECURITY_INSTANCE_VALID
>> + /**< Security context is valid */
>> + } state;
>
> As a nit - why do you need state now?
> As I understand if device doesn't have its security context setup properly,
> then rte_eth_dev_get_sec_ctx() would just return 0.
> Konstantin
Ok would remove it in v5.
>
>> + /**< Current state of security context */
>> + void *device;
>> + /**< Crypto/ethernet device attached */
>> + struct rte_security_ops *ops;
>> + /**< Pointer to security ops for the device */
>> + uint16_t sess_cnt;
>> + /**< Number of sessions attached to this context */
>> +};
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library Akhil Goyal
2017-10-15 12:47 ` Aviad Yehezkel
2017-10-19 9:30 ` Ananyev, Konstantin
@ 2017-10-20 9:37 ` Thomas Monjalon
2017-10-20 9:39 ` Thomas Monjalon
2017-10-21 19:45 ` Akhil Goyal
2 siblings, 2 replies; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 9:37 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
15/10/2017 00:17, Akhil Goyal:
> lib/librte_security/Makefile | 53 +++
> lib/librte_security/rte_security.c | 149 ++++++++
> lib/librte_security/rte_security.h | 535 +++++++++++++++++++++++++++
> lib/librte_security/rte_security_driver.h | 155 ++++++++
> lib/librte_security/rte_security_version.map | 13 +
> 5 files changed, 905 insertions(+)
Please add the doxygen changes in doc/api/ in this patch.
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-20 9:37 ` Thomas Monjalon
@ 2017-10-20 9:39 ` Thomas Monjalon
2017-10-21 19:46 ` Akhil Goyal
2017-10-21 19:45 ` Akhil Goyal
1 sibling, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 9:39 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
20/10/2017 11:37, Thomas Monjalon:
> 15/10/2017 00:17, Akhil Goyal:
> > lib/librte_security/Makefile | 53 +++
> > lib/librte_security/rte_security.c | 149 ++++++++
> > lib/librte_security/rte_security.h | 535 +++++++++++++++++++++++++++
> > lib/librte_security/rte_security_driver.h | 155 ++++++++
> > lib/librte_security/rte_security_version.map | 13 +
> > 5 files changed, 905 insertions(+)
>
> Please add the doxygen changes in doc/api/ in this patch.
You should also add the MAINTAINER entry in this patch.
And the new lib must be added to the list of .so files in the release notes.
Last details :)
Thanks
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-20 9:39 ` Thomas Monjalon
@ 2017-10-21 19:46 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:46 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
On 10/20/2017 3:09 PM, Thomas Monjalon wrote:
> 20/10/2017 11:37, Thomas Monjalon:
>> 15/10/2017 00:17, Akhil Goyal:
>>> lib/librte_security/Makefile | 53 +++
>>> lib/librte_security/rte_security.c | 149 ++++++++
>>> lib/librte_security/rte_security.h | 535 +++++++++++++++++++++++++++
>>> lib/librte_security/rte_security_driver.h | 155 ++++++++
>>> lib/librte_security/rte_security_version.map | 13 +
>>> 5 files changed, 905 insertions(+)
>>
>> Please add the doxygen changes in doc/api/ in this patch.
>
> You should also add the MAINTAINER entry in this patch.
>
> And the new lib must be added to the list of .so files in the release notes.
>
> Last details :)
> Thanks
>
ok will add in next version.
Thanks,
Akhil
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library
2017-10-20 9:37 ` Thomas Monjalon
2017-10-20 9:39 ` Thomas Monjalon
@ 2017-10-21 19:45 ` Akhil Goyal
1 sibling, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:45 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
On 10/20/2017 3:07 PM, Thomas Monjalon wrote:
> 15/10/2017 00:17, Akhil Goyal:
>> lib/librte_security/Makefile | 53 +++
>> lib/librte_security/rte_security.c | 149 ++++++++
>> lib/librte_security/rte_security.h | 535 +++++++++++++++++++++++++++
>> lib/librte_security/rte_security_driver.h | 155 ++++++++
>> lib/librte_security/rte_security_version.map | 13 +
>> 5 files changed, 905 insertions(+)
>
> Please add the doxygen changes in doc/api/ in this patch.
>
ok.
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:47 ` Aviad Yehezkel
2017-10-20 9:41 ` Thomas Monjalon
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 03/12] cryptodev: support security APIs Akhil Goyal
` (11 subsequent siblings)
13 siblings, 2 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rte_security.rst | 559 +++++++++++++++++++++++++++++++++
4 files changed, 563 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/rte_security.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 990815f..7c680dc 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -58,7 +58,8 @@ The public API headers are grouped by topics:
[ixgbe] (@ref rte_pmd_ixgbe.h),
[i40e] (@ref rte_pmd_i40e.h),
[bnxt] (@ref rte_pmd_bnxt.h),
- [crypto_scheduler] (@ref rte_cryptodev_scheduler.h)
+ [crypto_scheduler] (@ref rte_cryptodev_scheduler.h),
+ [security] (@ref rte_security.h)
- **memory**:
[memseg] (@ref rte_memory.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 9e9fa56..567691b 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -70,6 +70,7 @@ INPUT = doc/api/doxy-api-index.md \
lib/librte_reorder \
lib/librte_ring \
lib/librte_sched \
+ lib/librte_security \
lib/librte_table \
lib/librte_timer \
lib/librte_vhost
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index b5ad6b8..46cb4fe 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -46,6 +46,7 @@ Programmer's Guide
rte_flow
traffic_management
cryptodev_lib
+ rte_security
link_bonding_poll_mode_drv_lib
timer_lib
hash_lib
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
new file mode 100644
index 0000000..0708856
--- /dev/null
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -0,0 +1,559 @@
+.. BSD LICENSE
+ Copyright 2017 NXP.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of NXP nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Security Library
+================
+
+The security library provides a framework for management and provisioning
+of security protocol operations offloaded to hardware based devices. The
+library defines generic APIs to create and free security sessions which can
+support full protocol offload as well as inline crypto operation with
+NIC or crypto devices. The framework currently only supports the IPSec protocol
+and associated operations; other protocols will be added in the future.
+
+Design Principles
+-----------------
+
+The security library provides an additional offload capability to an existing
+crypto device and/or ethernet device.
+
+.. code-block:: console
+
+ +---------------+
+ | rte_security |
+ +---------------+
+ \ /
+ +-----------+ +--------------+
+ | NIC PMD | | CRYPTO PMD |
+ +-----------+ +--------------+
+
+The supported offload types are explained in the sections below.
+
+Inline Crypto
+~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+The crypto processing for security protocol (e.g. IPSec) is processed
+inline during receive and transmission on NIC port. The flow based
+security action should be configured on the port.
+
+Ingress Data path - The packet is decrypted in the RX path and the relevant
+crypto status is set in the Rx descriptors. After successful inline
+crypto processing the packet is presented to the host as a regular Rx packet,
+however all security protocol related headers are still attached to the
+packet. e.g. in case of IPSec, the IPSec tunnel headers (if any) and
+ESP/AH headers will remain in the packet, but the received packet
+contains the decrypted data where the encrypted data was when the packet
+arrived. The driver Rx path checks the descriptors and, based on the
+crypto status, sets additional flags in the ``rte_mbuf.ol_flags`` field.
+
+.. note::
+
+ The underlying device may not support crypto processing for all ingress packets
+ matching a particular flow (e.g. fragmented packets); such packets will
+ be passed up still encrypted. It is the responsibility of the application to
+ process such encrypted packets using another crypto driver instance.
+
+Egress Data path - The software prepares the egress packet by adding
+the relevant security protocol headers; only the data is left unencrypted
+by the software. The driver will configure the Tx descriptors
+accordingly. The hardware device will encrypt the data before sending
+the packet out.
+
+.. note::
+
+ The underlying device may support post encryption TSO.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | |
+ | +------|------+ |
+ | +------V------+ |
+ | | Tunnel | | <------ Add tunnel header to packet
+ | +------|------+ |
+ | +------V------+ |
+ | | ESP | | <------ Add ESP header without trailer to packet
+ | | | | <------ Mark packet to be offloaded, add trailer
+ | +------|------+ | meta-data to mbuf
+ +--------V--------+
+ |
+ +--------V--------+
+ | L2 Stack |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Set hw context for inline crypto offload
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED | <------ Packet Encryption and
+ | NIC | Authentication happens inline
+ | |
+ +-----------------+
+
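+On ingress the result of inline processing is signalled through
+``rte_mbuf.ol_flags``. A minimal Rx side sketch, assuming the
+``PKT_RX_SEC_OFFLOAD`` flags introduced by the mbuf changes later in this
+patch set and application defined ``process_*`` helpers:
+
+.. code-block:: c
+
+    struct rte_mbuf *pkts[BURST_SZ];
+    uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
+    uint16_t i;
+
+    for (i = 0; i < nb_rx; i++) {
+        if (pkts[i]->ol_flags & PKT_RX_SEC_OFFLOAD) {
+            if (pkts[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+                rte_pktmbuf_free(pkts[i]); /* e.g. ICV check failed */
+            else
+                process_decrypted_esp(pkts[i]); /* ESP/tunnel headers still present */
+        } else {
+            process_encrypted(pkts[i]); /* not handled inline, still encrypted */
+        }
+    }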
+
+Inline protocol offload
+~~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+The crypto and protocol processing for security protocol (e.g. IPSec)
+is processed inline during receive and transmission. The flow based
+security action should be configured on the port.
+
+Ingress Data path - The packet is decrypted in the RX path and relevant
+crypto status is set in the Rx descriptors. After the successful inline
+crypto processing the packet is presented to the host as a regular Rx packet
+but all security protocol related headers are optionally removed from the
+packet. e.g. in the case of IPSec, the IPSec tunnel headers (if any),
+ESP/AH headers will be removed from the packet and the received packet
+will contain the decrypted packet only. The driver Rx path checks the
+descriptors and, based on the crypto status, sets additional flags in the
+``rte_mbuf.ol_flags`` field.
+
+.. note::
+
+ The underlying device in this case is stateful. It is expected that
+ the device shall support crypto processing for all kinds of packets matching
+ a given flow; this includes fragmented packets (post reassembly).
+ E.g. in case of IPSec the device may internally manage anti-replay etc.
+ It will provide a configuration option for anti-replay behavior, i.e. to drop
+ the packets or pass them to the driver with error flags set in the descriptor.
+
+Egress Data path - The software will send the plain packet without any
+security protocol headers added to the packet. The driver will configure
+the security index and other requirements in the Tx descriptors.
+The hardware device will do the security processing on the packet, which includes
+adding the relevant protocol headers and encrypting the data before sending
+the packet out. The software should make sure that the buffer
+has the required headroom and tailroom for any protocol header addition. The
+software may also do early fragmentation if the resultant packet is expected
+to exceed the MTU size.
+
+
+.. note::
+
+ The underlying device will manage state information required for egress
+ processing. E.g. in case of IPSec, the sequence number will be added to the
+ packet, however the device shall provide an indication when the sequence number
+ is about to overflow. The underlying device may support post encryption TSO.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | |
+ | +------|------+ |
+ | +------V------+ |
+ | | Desc | | <------ Mark packet to be offloaded
+ | +------|------+ |
+ +--------V--------+
+ |
+ +--------V--------+
+ | L2 Stack |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Set hw context for inline crypto offload
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED | <------ Add tunnel, ESP header etc header to
+ | NIC | packet. Packet Encryption and
+ | | Authentication happens inline.
+ +-----------------+
+
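+On egress, a sketch of the transmit side could look as follows, assuming the
+``PKT_TX_SEC_OFFLOAD`` flag from the mbuf changes later in this patch set,
+a session ``sess`` created on the port and its security context ``ctx``;
+the metadata call is only needed when ``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is
+advertised by the port:
+
+.. code-block:: c
+
+    /* plain packet; the device adds tunnel/ESP headers and encrypts inline */
+    pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+
+    /* only needed when DEV_TX_OFFLOAD_SEC_NEED_MDATA is advertised */
+    rte_security_set_pkt_metadata(ctx, sess, pkt, NULL);
+
+    rte_eth_tx_burst(port_id, queue_id, &pkt, 1);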
+
+Lookaside protocol offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+This extends librte_cryptodev to support the programming of IPsec
+Security Association (SA) as part of crypto session creation, including
+the SA definition. In addition to standard crypto processing, as defined by
+the cryptodev, the security protocol processing is also offloaded to the
+crypto device.
+
+Decryption: The packet is sent to the crypto device for security
+protocol processing. The device will decrypt the packet and it will also
+optionally remove additional security headers from the packet.
+E.g. in case of IPSec, IPSec tunnel headers (if any), ESP/AH headers
+will be removed from the packet and the decrypted packet may contain
+plain data only.
+
+.. note::
+
+ In case of IPSec the device may internally manage anti-replay etc.
+ It will provide a configuration option for anti-replay behavior, i.e. to drop
+ the packets or pass them to the driver with error flags set in the descriptor.
+
+Encryption: The software will submit the packet to the cryptodev as usual
+for encryption; the hardware device in this case will also add the relevant
+security protocol header along with encrypting the packet. The software
+should make sure that the buffer has the required headroom and tailroom
+for any protocol header addition.
+
+.. note::
+
+ In the case of IPSec, the sequence number will be added to the packet
+ by the device, which shall provide an indication when the sequence number
+ is about to overflow.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | | <------ SA maps to cryptodev session
+ | +------|------+ |
+ | +------|------+ |
+ | | \--------------------\
+ | | Crypto | | | <- Crypto processing through
+ | | /----------------\ | inline crypto PMD
+ | +------|------+ | | |
+ +--------V--------+ | |
+ | | |
+ +--------V--------+ | | create <-- SA is added to hw
+ | L2 Stack | | | inline using existing create
+ +--------|--------+ | | session sym session APIs
+ | | | |
+ +--------V--------+ +---|---|----V---+
+ | | | \---/ | | <--- Add tunnel, ESP header etc
+ | NIC PMD | | INLINE | | header to packet.Packet
+ | | | CRYPTO PMD | | Encryption/Decryption and
+ +--------|--------+ +----------------+ Authentication happens
+ | inline.
+ +--------|--------+
+ | NIC |
+ +--------|--------+
+ V
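+
+As an illustration of the lookaside flow described above, the following is a
+sketch of attaching a security session (created with
+``RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL``) to a crypto operation before
+enqueueing it; ``op_pool``, ``sess``, ``pkt``, ``cdev_id`` and ``qp_id`` are
+assumed to be set up by the application:
+
+.. code-block:: c
+
+    struct rte_crypto_op *op = rte_crypto_op_alloc(op_pool,
+                                                   RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+
+    op->sym->m_src = pkt;   /* plain packet on egress, ESP packet on ingress */
+    rte_security_attach_session(op, sess);
+
+    rte_cryptodev_enqueue_burst(cdev_id, qp_id, &op, 1);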
+
+Device Features and Capabilities
+---------------------------------
+
+Device Capabilities For Security Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The capabilities of a device (crypto or ethernet) which supports security operations
+are defined by the security action type, security protocol, protocol
+capabilities and corresponding crypto capabilities for security. For the full
+scope of the security capability, see the definition of the ``rte_security_capability``
+structure in the *DPDK API Reference*.
+
+.. code-block:: c
+
+ struct rte_security_capability;
+
+Each driver (crypto or ethernet) defines its own private array of capabilities
+for the operations it supports. Below is an example of the capabilities for a
+PMD which supports the IPSec protocol.
+
+.. code-block:: c
+
+ static const struct rte_security_capability pmd_security_capabilities[] = {
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = pmd_capabilities
+ },
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = pmd_capabilities
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+ };
+ static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
+ { /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ .auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ },
+ .aad_size = { 0 },
+ .iv_size = { 0 }
+ }
+ }
+ },
+ { /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ .cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }
+ }
+ }
+ }
+
+
+Capabilities Discovery
+~~~~~~~~~~~~~~~~~~~~~~
+
+Discovering the features and capabilities of a driver (crypto/ethernet)
+is achieved through the ``rte_security_capabilities_get()`` function.
+
+.. code-block:: c
+
+ const struct rte_security_capability *rte_security_capabilities_get(struct rte_security_ctx *instance);
+
+This allows the user to query a specific security instance and get all device
+security capabilities. It returns an array of ``rte_security_capability`` structures
+which contains all the capabilities for that device.
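+
+In addition, ``rte_security_capability_get()`` can be used to check for one
+specific capability. A hedged sketch, assuming ``port_id`` is a security
+enabled ethernet port whose security context is returned by
+``rte_eth_dev_get_sec_ctx()``:
+
+.. code-block:: c
+
+    struct rte_security_ctx *ctx =
+        (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(port_id);
+
+    struct rte_security_capability_idx idx = {
+        .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+        .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+        .ipsec = {
+            .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+            .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+            .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+        },
+    };
+
+    const struct rte_security_capability *cap =
+        rte_security_capability_get(ctx, &idx);
+
+    if (cap == NULL)
+        return -ENOTSUP; /* port cannot do inline ESP tunnel egress */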
+
+Security Session Create/Free
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Security Sessions are created to store the immutable fields of a particular Security
+Association for a particular protocol. A session is defined by a security session
+configuration structure, which is used in the operation processing of a packet flow.
+Sessions are used to manage protocol specific information as well as crypto parameters.
+Security sessions cache this immutable data in an optimal way for the underlying PMD,
+which allows further acceleration of the offload of crypto workloads.
+
+The Security framework provides APIs to create and free sessions for crypto/ethernet
+devices, where sessions are mempool objects. It is the application's responsibility
+to create and manage the session mempools. The mempool object size should be large
+enough to accommodate the driver's security session private data.
+
+Once the session mempools have been created, ``rte_security_session_create()``
+is used to allocate and initialize a session for the required crypto/ethernet device.
+
+Session APIs need a parameter ``rte_security_ctx`` to identify the crypto/ethernet
+security ops. This parameter can be retrieved using the APIs
+``rte_cryptodev_get_sec_ctx()`` (for a crypto device) or ``rte_eth_dev_get_sec_ctx()``
+(for an ethernet port).
+
+Sessions already created can be updated with ``rte_security_session_update()``.
+
+When a session is no longer used, the user must call ``rte_security_session_destroy()``
+to free the driver private session data and return the memory back to the mempool.
+
+For lookaside protocol offload to a hardware crypto device, the ``rte_crypto_op``
+created by the application is attached to the security session by the API
+``rte_security_attach_session()``.
+
+For Inline Crypto and Inline protocol offload, device specific defined metadata is
+updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
+``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
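+
+Putting this together, a hedged sketch of session setup for lookaside IPsec
+ESP tunnel egress might look as follows; ``ctx`` is the security context
+retrieved as described above, while ``sess_elt_size`` and the
+``cipher_auth_xform`` chain are application/driver specific placeholders:
+
+.. code-block:: c
+
+    /* element size must cover the generic session header plus the
+     * driver's private session data (driver specific)
+     */
+    struct rte_mempool *sess_mp = rte_mempool_create("sec_sess_pool",
+        1024, sess_elt_size, 0, 0, NULL, NULL, NULL, NULL,
+        SOCKET_ID_ANY, 0);
+
+    struct rte_security_session_conf conf = {
+        .action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+        .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+        .ipsec = {
+            .spi = 0x100,
+            .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+            .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+            .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+            /* .tunnel carries the tunnel endpoint addresses */
+        },
+        .crypto_xform = &cipher_auth_xform,
+    };
+
+    struct rte_security_session *sess =
+        rte_security_session_create(ctx, &conf, sess_mp);
+
+    /* ... attach to crypto ops / flows and process traffic ... */
+
+    rte_security_session_destroy(ctx, sess);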
+
+Security session configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Security Session configuration structure is defined as ``rte_security_session_conf``
+
+.. code-block:: c
+
+ struct rte_security_session_conf {
+ enum rte_security_session_action_type action_type;
+ /**< Type of action to be performed on the session */
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+ union {
+ struct rte_security_ipsec_xform ipsec;
+ struct rte_security_macsec_xform macsec;
+ };
+ /**< Configuration parameters for security session */
+ struct rte_crypto_sym_xform *crypto_xform;
+ /**< Security Session Crypto Transformations */
+ };
+
+The configuration structure reuses the ``rte_crypto_sym_xform`` struct for crypto related
+configuration. The ``rte_security_session_action_type`` enum is used to specify whether the
+session is configured for Lookaside Protocol offload or Inline Crypto or Inline Protocol
+Offload.
+
+.. code-block:: c
+
+ enum rte_security_session_action_type {
+ RTE_SECURITY_ACTION_TYPE_NONE,
+ /**< No security actions */
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ /**< Crypto processing for security protocol is processed inline
+ * during transmission */
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ /**< All security protocol processing is performed inline during
+ * transmission */
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ /**< All security protocol processing including crypto is performed
+ * on a lookaside accelerator */
+ };
+
+The ``rte_security_session_protocol`` is defined as
+
+.. code-block:: c
+
+ enum rte_security_session_protocol {
+ RTE_SECURITY_PROTOCOL_IPSEC,
+ /**< IPsec Protocol */
+ RTE_SECURITY_PROTOCOL_MACSEC,
+ /**< MACSec Protocol */
+ };
+
+Currently the library defines configuration parameters for IPSec only. For other
+protocols like MACSec, structures and enums are defined as placeholders which
+will be updated in the future.
+
+IPsec related configuration parameters are defined in ``rte_security_ipsec_xform``
+
+.. code-block:: c
+
+ struct rte_security_ipsec_xform {
+ uint32_t spi;
+ /**< SA security parameter index */
+ uint32_t salt;
+ /**< SA salt */
+ struct rte_security_ipsec_sa_options options;
+ /**< various SA options */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPSec SA Direction - Egress/Ingress */
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA Protocol - AH/ESP */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA Mode - transport/tunnel */
+ struct rte_security_ipsec_tunnel_param tunnel;
+ /**< Tunnel parameters, NULL for transport mode */
+ };
+
+
+Security API
+~~~~~~~~~~~~
+
+The rte_security Library API is described in the *DPDK API Reference* document.
+
+Flow based Security Session
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the case of NIC based offloads, the security session specified in the
+``rte_flow_action_security`` must be created on the same port as the
+flow action that is being specified.
+
+The ingress/egress flow attribute should match that specified in the security
+session if the security session supports the definition of the direction.
+
+Multiple flows can be configured to use the same security session. For
+example, if the security session specifies an egress IPsec SA, then multiple
+flows can be mapped to that SA. In the case of an ingress IPsec SA, it is
+only valid to have a single flow mapped to that security session.
+
+.. code-block:: console
+
+ Configuration Path
+ |
+ +--------|--------+
+ | Add/Remove |
+ | IPsec SA | <------ Build security flow action of
+ | | | ipsec transform
+ |--------|--------|
+ |
+ +--------V--------+
+ | Flow API |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Add/Remove SA to/from hw context
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED |
+ | NIC |
+ | |
+ +--------|--------+
+
+* Add/Delete SA flow:
+ To add a new inline SA, construct a ``rte_flow_item`` for Ethernet + IP + ESP
+ using the SA selectors and the ``rte_crypto_ipsec_xform`` as the ``rte_flow_action``.
+ Note that any of the ``rte_flow_item`` entries may be empty, which means that field is not checked.
+
+.. code-block:: console
+
+ In its most basic form, IPsec flow specification is as follows:
+ +-------+ +----------+ +--------+ +-----+
+ | Eth | -> | IP4/6 | -> | ESP | -> | END |
+ +-------+ +----------+ +--------+ +-----+
+
+ However, the API can represent, IPsec crypto offload with any encapsulation:
+ +-------+ +--------+ +-----+
+ | Eth | -> ... -> | ESP | -> | END |
+ +-------+ +--------+ +-----+
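+
+A hedged sketch of programming such a flow through the generic flow API, using
+the ``RTE_FLOW_ITEM_TYPE_ESP`` item and ``RTE_FLOW_ACTION_TYPE_SECURITY``
+action added elsewhere in this patch set; ``sess`` is an inline security
+session created on ``port_id``:
+
+.. code-block:: c
+
+    struct rte_flow_attr attr = { .egress = 1 };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH },
+        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
+        { .type = RTE_FLOW_ITEM_TYPE_ESP },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action_security sec = { .security_session = sess };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &sec },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+    struct rte_flow_error err;
+
+    struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
+                                            actions, &err);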
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security Akhil Goyal
@ 2017-10-15 12:47 ` Aviad Yehezkel
2017-10-20 9:41 ` Thomas Monjalon
1 sibling, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:47 UTC (permalink / raw)
To: dev
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>
> ---
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf | 1 +
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/rte_security.rst | 559 +++++++++++++++++++++++++++++++++
> 4 files changed, 563 insertions(+), 1 deletion(-)
> create mode 100644 doc/guides/prog_guide/rte_security.rst
>
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index 990815f..7c680dc 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -58,7 +58,8 @@ The public API headers are grouped by topics:
> [ixgbe] (@ref rte_pmd_ixgbe.h),
> [i40e] (@ref rte_pmd_i40e.h),
> [bnxt] (@ref rte_pmd_bnxt.h),
> - [crypto_scheduler] (@ref rte_cryptodev_scheduler.h)
> + [crypto_scheduler] (@ref rte_cryptodev_scheduler.h),
> + [security] (@ref rte_security.h)
>
> - **memory**:
> [memseg] (@ref rte_memory.h),
> diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
> index 9e9fa56..567691b 100644
> --- a/doc/api/doxy-api.conf
> +++ b/doc/api/doxy-api.conf
> @@ -70,6 +70,7 @@ INPUT = doc/api/doxy-api-index.md \
> lib/librte_reorder \
> lib/librte_ring \
> lib/librte_sched \
> + lib/librte_security \
> lib/librte_table \
> lib/librte_timer \
> lib/librte_vhost
> diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
> index b5ad6b8..46cb4fe 100644
> --- a/doc/guides/prog_guide/index.rst
> +++ b/doc/guides/prog_guide/index.rst
> @@ -46,6 +46,7 @@ Programmer's Guide
> rte_flow
> traffic_management
> cryptodev_lib
> + rte_security
> link_bonding_poll_mode_drv_lib
> timer_lib
> hash_lib
> diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
> new file mode 100644
> index 0000000..0708856
> --- /dev/null
> +++ b/doc/guides/prog_guide/rte_security.rst
> @@ -0,0 +1,559 @@
> +.. BSD LICENSE
> + Copyright 2017 NXP.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + * Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> + * Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> + * Neither the name of NXP nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +
> +Security Library
> +================
> +
> +The security library provides a framework for management and provisioning
> +of security protocol operations offloaded to hardware based devices. The
> +library defines generic APIs to create and free security sessions which can
> +support full protocol offload as well as inline crypto operation with
> +NIC or crypto devices. The framework currently only supports the IPSec protocol
> +and associated operations, other protocols will be added in future.
> +
> +Design Principles
> +-----------------
> +
> +The security library provides an additional offload capability to an existing
> +crypto device and/or ethernet device.
> +
> +.. code-block:: console
> +
> + +---------------+
> + | rte_security |
> + +---------------+
> + \ /
> + +-----------+ +--------------+
> + | NIC PMD | | CRYPTO PMD |
> + +-----------+ +--------------+
> +
> +The supported offload types are explained in the sections below.
> +
> +Inline Crypto
> +~~~~~~~~~~~~~
> +
> +RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
> +The crypto processing for security protocol (e.g. IPSec) is processed
> +inline during receive and transmission on NIC port. The flow based
> +security action should be configured on the port.
> +
> +Ingress Data path - The packet is decrypted in RX path and relevant
> +crypto status is set in Rx descriptors. After the successful inline
> +crypto processing the packet is presented to host as a regular Rx packet
> +however all security protocol related headers are still attached to the
> +packet. e.g. In case of IPSec, the IPSec tunnel headers (if any),
> +ESP/AH headers will remain in the packet but the received packet
> +contains the decrypted data where the encrypted data was when the packet
> +arrived. The driver Rx path check the descriptors and and based on the
> +crypto status sets additional flags in the rte_mbuf.ol_flags field.
> +
> +.. note::
> +
> + The underlying device may not support crypto processing for all ingress packet
> + matching to a particular flow (e.g. fragmented packets), such packets will
> + be passed as encrypted packets. It is the responsibility of application to
> + process such encrypted packets using other crypto driver instance.
> +
> +Egress Data path - The software prepares the egress packet by adding
> +relevant security protocol headers. Only the data will not be
> +encrypted by the software. The driver will accordingly configure the
> +tx descriptors. The hardware device will encrypt the data before sending the
> +the packet out.
> +
> +.. note::
> +
> + The underlying device may support post encryption TSO.
> +
> +.. code-block:: console
> +
> + Egress Data Path
> + |
> + +--------|--------+
> + | egress IPsec |
> + | | |
> + | +------V------+ |
> + | | SADB lookup | |
> + | +------|------+ |
> + | +------V------+ |
> + | | Tunnel | | <------ Add tunnel header to packet
> + | +------|------+ |
> + | +------V------+ |
> + | | ESP | | <------ Add ESP header without trailer to packet
> + | | | | <------ Mark packet to be offloaded, add trailer
> + | +------|------+ | meta-data to mbuf
> + +--------V--------+
> + |
> + +--------V--------+
> + | L2 Stack |
> + +--------|--------+
> + |
> + +--------V--------+
> + | |
> + | NIC PMD | <------ Set hw context for inline crypto offload
> + | |
> + +--------|--------+
> + |
> + +--------|--------+
> + | HW ACCELERATED | <------ Packet Encryption and
> + | NIC | Authentication happens inline
> + | |
> + +-----------------+
> +
> +
> +Inline protocol offload
> +~~~~~~~~~~~~~~~~~~~~~~~
> +
> +RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
> +The crypto and protocol processing for security protocol (e.g. IPSec)
> +is processed inline during receive and transmission. The flow based
> +security action should be configured on the port.
> +
> +Ingress Data path - The packet is decrypted in the RX path and relevant
> +crypto status is set in the Rx descriptors. After the successful inline
> +crypto processing the packet is presented to the host as a regular Rx packet
> +but all security protocol related headers are optionally removed from the
> +packet. e.g. in the case of IPSec, the IPSec tunnel headers (if any),
> +ESP/AH headers will be removed from the packet and the received packet
> +will contains the decrypted packet only. The driver Rx path checks the
> +descriptors and based on the crypto status sets additional flags in
> +``rte_mbuf.ol_flags`` field.
> +
> +.. note::
> +
> + The underlying device in this case is stateful. It is expected that
> + the device shall support crypto processing for all kind of packets matching
> + to a given flow, this includes fragmented packets (post reassembly).
> + E.g. in case of IPSec the device may internally manage anti-replay etc.
> + It will provide a configuration option for anti-replay behavior i.e. to drop
> + the packets or pass them to driver with error flags set in the descriptor.
> +
> +Egress Data path - The software will send the plain packet without any
> +security protocol headers added to the packet. The driver will configure
> +the security index and other requirement in tx descriptors.
> +The hardware device will do security processing on the packet that includes
> +adding the relevant protocol headers and encrypting the data before sending
> +the packet out. The software should make sure that the buffer
> +has required head room and tail room for any protocol header addition. The
> +software may also do early fragmentation if the resultant packet is expected
> +to cross the MTU size.
> +
> +
> +.. note::
> +
> + The underlying device will manage state information required for egress
> + processing. E.g. in case of IPSec, the seq number will be added to the
> + packet, however the device shall provide indication when the sequence number
> + is about to overflow. The underlying device may support post encryption TSO.
> +
> +.. code-block:: console
> +
> + Egress Data Path
> + |
> + +--------|--------+
> + | egress IPsec |
> + | | |
> + | +------V------+ |
> + | | SADB lookup | |
> + | +------|------+ |
> + | +------V------+ |
> + | | Desc | | <------ Mark packet to be offloaded
> + | +------|------+ |
> + +--------V--------+
> + |
> + +--------V--------+
> + | L2 Stack |
> + +--------|--------+
> + |
> + +--------V--------+
> + | |
> + | NIC PMD | <------ Set hw context for inline crypto offload
> + | |
> + +--------|--------+
> + |
> + +--------|--------+
> + | HW ACCELERATED | <------ Add tunnel, ESP header etc header to
> + | NIC | packet. Packet Encryption and
> + | | Authentication happens inline.
> + +-----------------+
> +
> +
> +Lookaside protocol offload
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
> +This extends librte_cryptodev to support the programming of IPsec
> +Security Association (SA) as part of a crypto session creation including
> +the definition. In addition to standard crypto processing, as defined by
> +the cryptodev, the security protocol processing is also offloaded to the
> +crypto device.
> +
> +Decryption: The packet is sent to the crypto device for security
> +protocol processing. The device will decrypt the packet and it will also
> +optionally remove additional security headers from the packet.
> +E.g. in case of IPSec, IPSec tunnel headers (if any), ESP/AH headers
> +will be removed from the packet and the decrypted packet may contain
> +plain data only.
> +
> +.. note::
> +
> + In case of IPSec the device may internally manage anti-replay etc.
> + It will provide a configuration option for anti-replay behavior i.e. to drop
> + the packets or pass them to driver with error flags set in descriptor.
> +
> +Encryption: The software will submit the packet to cryptodev as usual
> +for encryption, the hardware device in this case will also add the relevant
> +security protocol header along with encrypting the packet. The software
> +should make sure that the buffer has required head room and tail room
> +for any protocol header addition.
> +
> +.. note::
> +
> + In the case of IPSec, the seq number will be added to the packet,
> + It shall provide an indication when the sequence number is about to
> + overflow.
> +
> +.. code-block:: console
> +
> + Egress Data Path
> + |
> + +--------|--------+
> + | egress IPsec |
> + | | |
> + | +------V------+ |
> + | | SADB lookup | | <------ SA maps to cryptodev session
> + | +------|------+ |
> + | +------|------+ |
> + | | \--------------------\
> + | | Crypto | | | <- Crypto processing through
> + | | /----------------\ | inline crypto PMD
> + | +------|------+ | | |
> + +--------V--------+ | |
> + | | |
> + +--------V--------+ | | create <-- SA is added to hw
> + | L2 Stack | | | inline using existing create
> + +--------|--------+ | | session sym session APIs
> + | | | |
> + +--------V--------+ +---|---|----V---+
> + | | | \---/ | | <--- Add tunnel, ESP header etc
> + | NIC PMD | | INLINE | | header to packet.Packet
> + | | | CRYPTO PMD | | Encryption/Decryption and
> + +--------|--------+ +----------------+ Authentication happens
> + | inline.
> + +--------|--------+
> + | NIC |
> + +--------|--------+
> + V
> +
> +Device Features and Capabilities
> +---------------------------------
> +
> +Device Capabilities For Security Operations
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The device (crypto or ethernet) capabilities which support security operations,
> +are defined by the security action type, security protocol, protocol
> +capabilities and corresponding crypto capabilities for security. For the full
> +scope of the Security capability see definition of rte_security_capability
> +structure in the *DPDK API Reference*.
> +
> +.. code-block:: c
> +
> + struct rte_security_capability;
> +
> +Each driver (crypto or ethernet) defines its own private array of capabilities
> +for the operations it supports. Below is an example of the capabilities for a
> +PMD which supports the IPSec protocol.
> +
> +.. code-block:: c
> +
> + static const struct rte_security_capability pmd_security_capabilities[] = {
> + { /* IPsec Lookaside Protocol offload ESP Tunnel Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = pmd_capabilities
> + },
> + { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = pmd_capabilities
> + },
> + {
> + .action = RTE_SECURITY_ACTION_TYPE_NONE
> + }
> + };
> + static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
> + { /* SHA1 HMAC */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + .sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + .auth = {
> + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
> + .block_size = 64,
> + .key_size = {
> + .min = 64,
> + .max = 64,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + },
> + .aad_size = { 0 },
> + .iv_size = { 0 }
> + }
> + }
> + },
> + { /* AES CBC */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + .sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
> + .cipher = {
> + .algo = RTE_CRYPTO_CIPHER_AES_CBC,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 32,
> + .increment = 8
> + },
> + .iv_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + }
> + }
> + }
> + }
> + }
> +
> +
> +Capabilities Discovery
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +Discovering the features and capabilities of a driver (crypto/ethernet)
> +is achieved through the ``rte_security_capabilities_get()`` function.
> +
> +.. code-block:: c
> +
> + const struct rte_security_capability *rte_security_capabilities_get(uint16_t id);
> +
> +This allows the user to query a specific driver and get all device
> +security capabilities. It returns an array of ``rte_security_capability`` structures
> +which contains all the capabilities for that device.
> +
> +Security Session Create/Free
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Security Sessions are created to store the immutable fields of a particular Security
> +Association for a particular protocol which is defined by a security session
> +configuration structure which is used in the operation processing of a packet flow.
> +Sessions are used to manage protocol specific information as well as crypto parameters.
> +Security sessions cache this immutable data in a optimal way for the underlying PMD
> +and this allows further acceleration of the offload of Crypto workloads.
> +
> +The Security framework provides APIs to create and free sessions for crypto/ethernet
> +devices, where sessions are mempool objects. It is the application's responsibility
> +to create and manage the session mempools. The mempool object size should be able to
> +accommodate the driver's private data of security session.
> +
> +Once the session mempools have been created, ``rte_security_session_create()``
> +is used to allocate and initialize a session for the required crypto/ethernet device.
> +
> +Session APIs need a parameter ``rte_security_ctx`` to identify the crypto/ethernet
> +security ops. This parameter can be retreived using the APIs
> +``rte_cryptodev_get_sec_ctx()`` (for crypto device) or ``rte_eth_dev_get_sec_ctx``
> +(for ethernet port).
> +
> +Sessions already created can be updated with ``rte_security_session_update()``.
> +
> +When a session is no longer used, the user must call ``rte_security_session_destroy()``
> +to free the driver private session data and return the memory back to the mempool.
> +
> +For look aside protocol offload to hardware crypto device, the ``rte_crypto_op``
> +created by the application is attached to the security session by the API
> +``rte_security_attach_session()``.
> +
> +For Inline Crypto and Inline protocol offload, device specific defined metadata is
> +updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
> +``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
> +
> +Security session configuration
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Security Session configuration structure is defined as ``rte_security_session_conf``
> +
> +.. code-block:: c
> +
> + struct rte_security_session_conf {
> + enum rte_security_session_action_type action_type;
> + /**< Type of action to be performed on the session */
> + enum rte_security_session_protocol protocol;
> + /**< Security protocol to be configured */
> + union {
> + struct rte_security_ipsec_xform ipsec;
> + struct rte_security_macsec_xform macsec;
> + };
> + /**< Configuration parameters for security session */
> + struct rte_crypto_sym_xform *crypto_xform;
> + /**< Security Session Crypto Transformations */
> + };
> +
> +The configuration structure reuses the ``rte_crypto_sym_xform`` struct for crypto related
> +configuration. The ``rte_security_session_action_type`` struct is used to specify whether the
> +session is configured for Lookaside Protocol offload or Inline Crypto or Inline Protocol
> +Offload.
> +
> +.. code-block:: c
> +
> + enum rte_security_session_action_type {
> + RTE_SECURITY_ACTION_TYPE_NONE,
> + /**< No security actions */
> + RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + /**< Crypto processing for security protocol is processed inline
> + * during transmission */
> + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
> + /**< All security protocol processing is performed inline during
> + * transmission */
> + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> + /**< All security protocol processing including crypto is performed
> + * on a lookaside accelerator */
> + };
> +
> +The ``rte_security_session_protocol`` is defined as
> +
> +.. code-block:: c
> +
> + enum rte_security_session_protocol {
> + RTE_SECURITY_PROTOCOL_IPSEC,
> + /**< IPsec Protocol */
> + RTE_SECURITY_PROTOCOL_MACSEC,
> + /**< MACSec Protocol */
> + };
> +
> +Currently the library defines configuration parameters for IPSec only. For other
> +protocols like MACSec, structures and enums are defined as place holders which
> +will be updated in the future.
> +
> +IPsec related configuration parameters are defined in ``rte_security_ipsec_xform``
> +
> +.. code-block:: c
> +
> + struct rte_security_ipsec_xform {
> + uint32_t spi;
> + /**< SA security parameter index */
> + uint32_t salt;
> + /**< SA salt */
> + struct rte_security_ipsec_sa_options options;
> + /**< various SA options */
> + enum rte_security_ipsec_sa_direction direction;
> + /**< IPSec SA Direction - Egress/Ingress */
> + enum rte_security_ipsec_sa_protocol proto;
> + /**< IPsec SA Protocol - AH/ESP */
> + enum rte_security_ipsec_sa_mode mode;
> + /**< IPsec SA Mode - transport/tunnel */
> + struct rte_security_ipsec_tunnel_param tunnel;
> + /**< Tunnel parameters, NULL for transport mode */
> + };
> +
> +
> +Security API
> +~~~~~~~~~~~~
> +
> +The rte_security Library API is described in the *DPDK API Reference* document.
> +
> +Flow based Security Session
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +In the case of NIC based offloads, the security session specified in the
> +'rte_flow_action_security' must be created on the same port as the
> +flow action that is being specified.
> +
> +The ingress/egress flow attribute should match that specified in the security
> +session if the security session supports the definition of the direction.
> +
> +Multiple flows can be configured to use the same security session. For
> +example if the security session specifies an egress IPsec SA, then multiple
> +flows can be specified to that SA. In the case of an ingress IPsec SA then
> +it is only valid to have a single flow to map to that security session.
> +
> +.. code-block:: console
> +
> + Configuration Path
> + |
> + +--------|--------+
> + | Add/Remove |
> + | IPsec SA | <------ Build security flow action of
> + | | | ipsec transform
> + |--------|--------|
> + |
> + +--------V--------+
> + | Flow API |
> + +--------|--------+
> + |
> + +--------V--------+
> + | |
> + | NIC PMD | <------ Add/Remove SA to/from hw context
> + | |
> + +--------|--------+
> + |
> + +--------|--------+
> + | HW ACCELERATED |
> + | NIC |
> + | |
> + +--------|--------+
> +
> +* Add/Delete SA flow:
> + To add a new inline SA, construct a rte_flow_item pattern for Ethernet + IP + ESP
> + using the SA selectors, and use the ``rte_crypto_ipsec_xform`` as the ``rte_flow_action``.
> + Note that any of the rte_flow_items may be empty, meaning that field is not checked.
> +
> +.. code-block:: console
> +
> + In its most basic form, IPsec flow specification is as follows:
> + +-------+ +----------+ +--------+ +-----+
> + | Eth | -> | IP4/6 | -> | ESP | -> | END |
> + +-------+ +----------+ +--------+ +-----+
> +
> + However, the API can represent IPsec crypto offload with any encapsulation:
> + +-------+ +--------+ +-----+
> + | Eth | -> ... -> | ESP | -> | END |
> + +-------+ +--------+ +-----+
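A minimal sketch of the Add SA flow described above, assuming a security session sess (created as sketched earlier), an SPI value sa_spi and a port port_id; the SECURITY action type and struct rte_flow_action_security are the ones added by the rte_flow action patch later in this series:

    struct rte_flow_attr attr = { .egress = 1 };
    struct rte_flow_item_esp esp_spec = {
            .hdr = { .spi = rte_cpu_to_be_32(sa_spi) },
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_ESP,
              .spec = &esp_spec, .mask = &rte_flow_item_esp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_security action_conf = { .security_session = sess };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &action_conf },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow *flow =
            rte_flow_create(port_id, &attr, pattern, actions, &err);
    /* a NULL return means the PMD rejected the flow; see err.message */
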
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
* Re: [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security Akhil Goyal
2017-10-15 12:47 ` Aviad Yehezkel
@ 2017-10-20 9:41 ` Thomas Monjalon
2017-10-21 19:48 ` Akhil Goyal
1 sibling, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 9:41 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
15/10/2017 00:17, Akhil Goyal:
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -58,7 +58,8 @@ The public API headers are grouped by topics:
> [ixgbe] (@ref rte_pmd_ixgbe.h),
> [i40e] (@ref rte_pmd_i40e.h),
> [bnxt] (@ref rte_pmd_bnxt.h),
> - [crypto_scheduler] (@ref rte_cryptodev_scheduler.h)
> + [crypto_scheduler] (@ref rte_cryptodev_scheduler.h),
> + [security] (@ref rte_security.h)
>
This section is "device specific".
Please move "security" to the first section "device", below "cryptodev".
Thanks
* Re: [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security
2017-10-20 9:41 ` Thomas Monjalon
@ 2017-10-21 19:48 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:48 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
On 10/20/2017 3:11 PM, Thomas Monjalon wrote:
> 15/10/2017 00:17, Akhil Goyal:
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -58,7 +58,8 @@ The public API headers are grouped by topics:
>> [ixgbe] (@ref rte_pmd_ixgbe.h),
>> [i40e] (@ref rte_pmd_i40e.h),
>> [bnxt] (@ref rte_pmd_bnxt.h),
>> - [crypto_scheduler] (@ref rte_cryptodev_scheduler.h)
>> + [crypto_scheduler] (@ref rte_cryptodev_scheduler.h),
>> + [security] (@ref rte_security.h)
>>
>
> This section is "device specific".
>
> Please move "security" to the first section "device", below "cryptodev".
>
Ok. Thanks for the review.
* [dpdk-dev] [PATCH v4 03/12] cryptodev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 01/12] lib/rte_security: add security library Akhil Goyal
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 02/12] doc: add details of rte security Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:48 ` Aviad Yehezkel
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering Akhil Goyal
` (10 subsequent siblings)
13 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Security ops are added to crypto device to support
protocol offloaded security operations.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
doc/guides/cryptodevs/features/default.ini | 1 +
lib/librte_cryptodev/rte_crypto.h | 3 ++-
lib/librte_cryptodev/rte_crypto_sym.h | 2 ++
lib/librte_cryptodev/rte_cryptodev.c | 10 ++++++++++
lib/librte_cryptodev/rte_cryptodev.h | 7 +++++++
lib/librte_cryptodev/rte_cryptodev_version.map | 1 +
6 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index c98717a..18d66cb 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -10,6 +10,7 @@ Symmetric crypto =
Asymmetric crypto =
Sym operation chaining =
HW Accelerated =
+Protocol offload =
CPU SSE =
CPU AVX =
CPU AVX2 =
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 10fe080..3eb9ef9 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -86,7 +86,8 @@ enum rte_crypto_op_status {
*/
enum rte_crypto_op_sess_type {
RTE_CRYPTO_OP_WITH_SESSION, /**< Session based crypto operation */
- RTE_CRYPTO_OP_SESSIONLESS /**< Session-less crypto operation */
+ RTE_CRYPTO_OP_SESSIONLESS, /**< Session-less crypto operation */
+ RTE_CRYPTO_OP_SECURITY_SESSION /**< Security session crypto operation */
};
/**
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 0a0ea59..5992063 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -508,6 +508,8 @@ struct rte_crypto_sym_op {
/**< Handle for the initialised session context */
struct rte_crypto_sym_xform *xform;
/**< Session-less API crypto operation parameters */
+ struct rte_security_session *sec_session;
+ /**< Handle for the initialised security session context */
};
RTE_STD_C11
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index e48d562..5a2495b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -488,6 +488,16 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
return count;
}
+void *
+rte_cryptodev_get_sec_ctx(uint8_t dev_id)
+{
+ if (rte_crypto_devices[dev_id].feature_flags &
+ RTE_CRYPTODEV_FF_SECURITY)
+ return rte_crypto_devices[dev_id].data->security_ctx;
+
+ return NULL;
+}
+
int
rte_cryptodev_socket_id(uint8_t dev_id)
{
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index fd0e3f1..546454b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -351,6 +351,8 @@ rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
/**< Utilises CPU NEON instructions */
#define RTE_CRYPTODEV_FF_CPU_ARM_CE (1ULL << 11)
/**< Utilises ARM CPU Cryptographic Extensions */
+#define RTE_CRYPTODEV_FF_SECURITY (1ULL << 12)
+/**< Support Security Protocol Processing */
/**
@@ -774,6 +776,9 @@ struct rte_cryptodev {
/**< Flag indicating the device is attached */
} __rte_cache_aligned;
+void *
+rte_cryptodev_get_sec_ctx(uint8_t dev_id);
+
/**
*
* The data part, with no function pointers, associated with each device.
@@ -802,6 +807,8 @@ struct rte_cryptodev_data {
void *dev_private;
/**< PMD-specific private data */
+ void *security_ctx;
+ /**< Context for security ops */
} __rte_cache_aligned;
extern struct rte_cryptodev *rte_cryptodevs;
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 919b6cc..7ef1b0f 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -84,5 +84,6 @@ DPDK_17.11 {
global:
rte_cryptodev_name_get;
+ rte_cryptodev_get_sec_ctx;
} DPDK_17.08;
--
2.9.3
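A usage sketch, with cdev_id assumed: the application checks the new feature flag and fetches the security context before building sessions:

    struct rte_cryptodev_info info;

    rte_cryptodev_info_get(cdev_id, &info);
    if (info.feature_flags & RTE_CRYPTODEV_FF_SECURITY) {
            struct rte_security_ctx *ctx = (struct rte_security_ctx *)
                    rte_cryptodev_get_sec_ctx(cdev_id);
            /* ctx is then passed to rte_security_session_create(); the
             * resulting session is attached to a crypto op through
             * op->sym->sec_session, with op->sess_type set to
             * RTE_CRYPTO_OP_SECURITY_SESSION. */
    }
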
* Re: [dpdk-dev] [PATCH v4 03/12] cryptodev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 03/12] cryptodev: support security APIs Akhil Goyal
@ 2017-10-15 12:48 ` Aviad Yehezkel
0 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:48 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> Security ops are added to crypto device to support
> protocol offloaded security operations.
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> doc/guides/cryptodevs/features/default.ini | 1 +
> lib/librte_cryptodev/rte_crypto.h | 3 ++-
> lib/librte_cryptodev/rte_crypto_sym.h | 2 ++
> lib/librte_cryptodev/rte_cryptodev.c | 10 ++++++++++
> lib/librte_cryptodev/rte_cryptodev.h | 7 +++++++
> lib/librte_cryptodev/rte_cryptodev_version.map | 1 +
> 6 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
> index c98717a..18d66cb 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -10,6 +10,7 @@ Symmetric crypto =
> Asymmetric crypto =
> Sym operation chaining =
> HW Accelerated =
> +Protocol offload =
> CPU SSE =
> CPU AVX =
> CPU AVX2 =
> diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
> index 10fe080..3eb9ef9 100644
> --- a/lib/librte_cryptodev/rte_crypto.h
> +++ b/lib/librte_cryptodev/rte_crypto.h
> @@ -86,7 +86,8 @@ enum rte_crypto_op_status {
> */
> enum rte_crypto_op_sess_type {
> RTE_CRYPTO_OP_WITH_SESSION, /**< Session based crypto operation */
> - RTE_CRYPTO_OP_SESSIONLESS /**< Session-less crypto operation */
> + RTE_CRYPTO_OP_SESSIONLESS, /**< Session-less crypto operation */
> + RTE_CRYPTO_OP_SECURITY_SESSION /**< Security session crypto operation */
> };
>
> /**
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
> index 0a0ea59..5992063 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -508,6 +508,8 @@ struct rte_crypto_sym_op {
> /**< Handle for the initialised session context */
> struct rte_crypto_sym_xform *xform;
> /**< Session-less API crypto operation parameters */
> + struct rte_security_session *sec_session;
> + /**< Handle for the initialised security session context */
> };
>
> RTE_STD_C11
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> index e48d562..5a2495b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -488,6 +488,16 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
> return count;
> }
>
> +void *
> +rte_cryptodev_get_sec_ctx(uint8_t dev_id)
> +{
> + if (rte_crypto_devices[dev_id].feature_flags &
> + RTE_CRYPTODEV_FF_SECURITY)
> + return rte_crypto_devices[dev_id].data->security_ctx;
> +
> + return NULL;
> +}
> +
> int
> rte_cryptodev_socket_id(uint8_t dev_id)
> {
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index fd0e3f1..546454b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -351,6 +351,8 @@ rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
> /**< Utilises CPU NEON instructions */
> #define RTE_CRYPTODEV_FF_CPU_ARM_CE (1ULL << 11)
> /**< Utilises ARM CPU Cryptographic Extensions */
> +#define RTE_CRYPTODEV_FF_SECURITY (1ULL << 12)
> +/**< Support Security Protocol Processing */
>
>
> /**
> @@ -774,6 +776,9 @@ struct rte_cryptodev {
> /**< Flag indicating the device is attached */
> } __rte_cache_aligned;
>
> +void *
> +rte_cryptodev_get_sec_ctx(uint8_t dev_id);
> +
> /**
> *
> * The data part, with no function pointers, associated with each device.
> @@ -802,6 +807,8 @@ struct rte_cryptodev_data {
>
> void *dev_private;
> /**< PMD-specific private data */
> + void *security_ctx;
> + /**< Context for security ops */
> } __rte_cache_aligned;
>
> extern struct rte_cryptodev *rte_cryptodevs;
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
> index 919b6cc..7ef1b0f 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -84,5 +84,6 @@ DPDK_17.11 {
> global:
>
> rte_cryptodev_name_get;
> + rte_cryptodev_get_sec_ctx;
>
> } DPDK_17.08;
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
* [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (2 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 03/12] cryptodev: support security APIs Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:48 ` Aviad Yehezkel
2017-10-20 10:15 ` Thomas Monjalon
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 05/12] mbuf: add security crypto flags and mbuf fields Akhil Goyal
` (9 subsequent siblings)
13 siblings, 2 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
The ESP header is required for IPsec crypto actions.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
doc/api/doxy-api-index.md | 3 ++-
lib/librte_ether/rte_flow.h | 26 ++++++++++++++++++++
lib/librte_net/Makefile | 2 +-
lib/librte_net/rte_esp.h | 60 +++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 89 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_net/rte_esp.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 7c680dc..d59893b 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -111,7 +111,8 @@ The public API headers are grouped by topics:
[LPM IPv6 route] (@ref rte_lpm6.h),
[ACL] (@ref rte_acl.h),
[EFD] (@ref rte_efd.h),
- [member] (@ref rte_member.h)
+ [member] (@ref rte_member.h),
+ [ESP] (@ref rte_esp.h)
- **QoS**:
[metering] (@ref rte_meter.h),
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index a0ffb71..7c89089 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -50,6 +50,7 @@
#include <rte_tcp.h>
#include <rte_udp.h>
#include <rte_byteorder.h>
+#include <rte_esp.h>
#ifdef __cplusplus
extern "C" {
@@ -336,6 +337,13 @@ enum rte_flow_item_type {
* See struct rte_flow_item_gtp.
*/
RTE_FLOW_ITEM_TYPE_GTPU,
+
+ /**
+ * Matches a ESP header.
+ *
+ * See struct rte_flow_item_esp.
+ */
+ RTE_FLOW_ITEM_TYPE_ESP,
};
/**
@@ -787,6 +795,24 @@ static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
#endif
/**
+ * RTE_FLOW_ITEM_TYPE_ESP
+ *
+ * Matches an ESP header.
+ */
+struct rte_flow_item_esp {
+ struct esp_hdr hdr; /**< ESP header definition. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_ESP. */
+#ifndef __cplusplus
+static const struct rte_flow_item_esp rte_flow_item_esp_mask = {
+ .hdr = {
+ .spi = 0xffffffff,
+ },
+};
+#endif
+
+/**
* Matching pattern item definition.
*
* A pattern is formed by stacking items starting from the lowest protocol
diff --git a/lib/librte_net/Makefile b/lib/librte_net/Makefile
index 56727c4..0f87b23 100644
--- a/lib/librte_net/Makefile
+++ b/lib/librte_net/Makefile
@@ -42,7 +42,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_NET) := rte_net.c
SRCS-$(CONFIG_RTE_LIBRTE_NET) += rte_net_crc.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h rte_esp.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_sctp.h rte_icmp.h rte_arp.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_ether.h rte_gre.h rte_net.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_net_crc.h
diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
new file mode 100644
index 0000000..e228af0
--- /dev/null
+++ b/lib/librte_net/rte_esp.h
@@ -0,0 +1,60 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_ESP_H_
+#define _RTE_ESP_H_
+
+/**
+ * @file
+ *
+ * ESP-related defines
+ */
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * ESP Header
+ */
+struct esp_hdr {
+ uint32_t spi; /**< Security Parameters Index */
+ uint32_t seq; /**< packet sequence number */
+} __attribute__((__packed__));
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_ESP_H_ */
--
2.9.3
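A small sketch of reading the new header from a received packet, assuming an untagged IPv4 packet without IP options, with m being the mbuf:

    struct esp_hdr *esp = rte_pktmbuf_mtod_offset(m, struct esp_hdr *,
                    sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr));
    uint32_t spi = rte_be_to_cpu_32(esp->spi);
    uint32_t seq = rte_be_to_cpu_32(esp->seq);
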
* Re: [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering Akhil Goyal
@ 2017-10-15 12:48 ` Aviad Yehezkel
2017-10-20 10:15 ` Thomas Monjalon
1 sibling, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:48 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> From: Boris Pismenny <borisp@mellanox.com>
>
> The ESP header is required for IPsec crypto actions.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> ---
> doc/api/doxy-api-index.md | 3 ++-
> lib/librte_ether/rte_flow.h | 26 ++++++++++++++++++++
> lib/librte_net/Makefile | 2 +-
> lib/librte_net/rte_esp.h | 60 +++++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 89 insertions(+), 2 deletions(-)
> create mode 100644 lib/librte_net/rte_esp.h
>
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index 7c680dc..d59893b 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -111,7 +111,8 @@ The public API headers are grouped by topics:
> [LPM IPv6 route] (@ref rte_lpm6.h),
> [ACL] (@ref rte_acl.h),
> [EFD] (@ref rte_efd.h),
> - [member] (@ref rte_member.h)
> + [member] (@ref rte_member.h),
> + [ESP] (@ref rte_esp.h)
>
> - **QoS**:
> [metering] (@ref rte_meter.h),
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index a0ffb71..7c89089 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -50,6 +50,7 @@
> #include <rte_tcp.h>
> #include <rte_udp.h>
> #include <rte_byteorder.h>
> +#include <rte_esp.h>
>
> #ifdef __cplusplus
> extern "C" {
> @@ -336,6 +337,13 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_gtp.
> */
> RTE_FLOW_ITEM_TYPE_GTPU,
> +
> + /**
> + * Matches a ESP header.
> + *
> + * See struct rte_flow_item_esp.
> + */
> + RTE_FLOW_ITEM_TYPE_ESP,
> };
>
> /**
> @@ -787,6 +795,24 @@ static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
> #endif
>
> /**
> + * RTE_FLOW_ITEM_TYPE_ESP
> + *
> + * Matches an ESP header.
> + */
> +struct rte_flow_item_esp {
> + struct esp_hdr hdr; /**< ESP header definition. */
> +};
> +
> +/** Default mask for RTE_FLOW_ITEM_TYPE_ESP. */
> +#ifndef __cplusplus
> +static const struct rte_flow_item_esp rte_flow_item_esp_mask = {
> + .hdr = {
> + .spi = 0xffffffff,
> + },
> +};
> +#endif
> +
> +/**
> * Matching pattern item definition.
> *
> * A pattern is formed by stacking items starting from the lowest protocol
> diff --git a/lib/librte_net/Makefile b/lib/librte_net/Makefile
> index 56727c4..0f87b23 100644
> --- a/lib/librte_net/Makefile
> +++ b/lib/librte_net/Makefile
> @@ -42,7 +42,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_NET) := rte_net.c
> SRCS-$(CONFIG_RTE_LIBRTE_NET) += rte_net_crc.c
>
> # install includes
> -SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h rte_esp.h
> SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_sctp.h rte_icmp.h rte_arp.h
> SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_ether.h rte_gre.h rte_net.h
> SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_net_crc.h
> diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
> new file mode 100644
> index 0000000..e228af0
> --- /dev/null
> +++ b/lib/librte_net/rte_esp.h
> @@ -0,0 +1,60 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_ESP_H_
> +#define _RTE_ESP_H_
> +
> +/**
> + * @file
> + *
> + * ESP-related defines
> + */
> +
> +#include <stdint.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * ESP Header
> + */
> +struct esp_hdr {
> + uint32_t spi; /**< Security Parameters Index */
> + uint32_t seq; /**< packet sequence number */
> +} __attribute__((__packed__));
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* RTE_ESP_H_ */
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
* Re: [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering Akhil Goyal
2017-10-15 12:48 ` Aviad Yehezkel
@ 2017-10-20 10:15 ` Thomas Monjalon
2017-10-21 19:49 ` Akhil Goyal
1 sibling, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 10:15 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
15/10/2017 00:17, Akhil Goyal:
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -111,7 +111,8 @@ The public API headers are grouped by topics:
> [LPM IPv6 route] (@ref rte_lpm6.h),
> [ACL] (@ref rte_acl.h),
> [EFD] (@ref rte_efd.h),
> - [member] (@ref rte_member.h)
> + [member] (@ref rte_member.h),
> + [ESP] (@ref rte_esp.h)
rte_member should not be in "layers" section.
I will probably move it to "basic".
Please move ESP near IP, maybe between IP and ICMP.
* Re: [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering
2017-10-20 10:15 ` Thomas Monjalon
@ 2017-10-21 19:49 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:49 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
On 10/20/2017 3:45 PM, Thomas Monjalon wrote:
> 15/10/2017 00:17, Akhil Goyal:
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -111,7 +111,8 @@ The public API headers are grouped by topics:
>> [LPM IPv6 route] (@ref rte_lpm6.h),
>> [ACL] (@ref rte_acl.h),
>> [EFD] (@ref rte_efd.h),
>> - [member] (@ref rte_member.h)
>> + [member] (@ref rte_member.h),
>> + [ESP] (@ref rte_esp.h)
>
> rte_member should not be in "layers" section.
> I will probably move it to "basic".
>
> Please move ESP near IP, maybe between IP and ICMP.
>
>
Ok. Will move it in between IP and ICMP
* [dpdk-dev] [PATCH v4 05/12] mbuf: add security crypto flags and mbuf fields
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (3 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 04/12] net: add ESP header to generic flow steering Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:49 ` Aviad Yehezkel
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs Akhil Goyal
` (8 subsequent siblings)
13 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
Add security crypto flags and update mbuf fields to support
IPsec crypto offload for transmitted packets, and to indicate
crypto result for received packets.
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ++++++
lib/librte_mbuf/rte_mbuf.h | 35 ++++++++++++++++++++++++++++++++---
lib/librte_mbuf/rte_mbuf_ptype.c | 1 +
lib/librte_mbuf/rte_mbuf_ptype.h | 11 +++++++++++
4 files changed, 50 insertions(+), 3 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 0e18709..6659261 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -324,6 +324,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
case PKT_RX_LRO: return "PKT_RX_LRO";
case PKT_RX_TIMESTAMP: return "PKT_RX_TIMESTAMP";
+ case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
+ case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
default: return NULL;
}
}
@@ -359,6 +361,8 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
{ PKT_RX_LRO, PKT_RX_LRO, NULL },
{ PKT_RX_TIMESTAMP, PKT_RX_TIMESTAMP, NULL },
+ { PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
+ { PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
};
const char *name;
unsigned int i;
@@ -411,6 +415,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
+ case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
default: return NULL;
}
}
@@ -444,6 +449,7 @@ rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK,
"PKT_TX_TUNNEL_NONE" },
{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
+ { PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
};
const char *name;
unsigned int i;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index cc38040..5d478da 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -189,11 +189,26 @@ extern "C" {
*/
#define PKT_RX_TIMESTAMP (1ULL << 17)
+/**
+ * Indicate that security offload processing was applied on the RX packet.
+ */
+#define PKT_RX_SEC_OFFLOAD (1ULL << 18)
+
+/**
+ * Indicate that security offload processing failed on the RX packet.
+ */
+#define PKT_RX_SEC_OFFLOAD_FAILED (1ULL << 19)
+
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Request security offload processing on the TX packet.
+ */
+#define PKT_TX_SEC_OFFLOAD (1ULL << 43)
+
+/**
* Offload the MACsec. This flag must be set by the application to enable
* this offload feature for a packet to be transmitted.
*/
@@ -316,7 +331,8 @@ extern "C" {
PKT_TX_QINQ_PKT | \
PKT_TX_VLAN_PKT | \
PKT_TX_TUNNEL_MASK | \
- PKT_TX_MACSEC)
+ PKT_TX_MACSEC | \
+ PKT_TX_SEC_OFFLOAD)
#define __RESERVED (1ULL << 61) /**< reserved for future mbuf use */
@@ -456,8 +472,21 @@ struct rte_mbuf {
uint32_t l3_type:4; /**< (Outer) L3 type. */
uint32_t l4_type:4; /**< (Outer) L4 type. */
uint32_t tun_type:4; /**< Tunnel type. */
- uint32_t inner_l2_type:4; /**< Inner L2 type. */
- uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ RTE_STD_C11
+ union {
+ uint8_t inner_esp_next_proto;
+ /**< ESP next protocol type, valid if
+ * RTE_PTYPE_TUNNEL_ESP tunnel type is set
+ * on both Tx and Rx.
+ */
+ __extension__
+ struct {
+ uint8_t inner_l2_type:4;
+ /**< Inner L2 type. */
+ uint8_t inner_l3_type:4;
+ /**< Inner L3 type. */
+ };
+ };
uint32_t inner_l4_type:4; /**< Inner L4 type. */
};
};
diff --git a/lib/librte_mbuf/rte_mbuf_ptype.c b/lib/librte_mbuf/rte_mbuf_ptype.c
index a450814..a623226 100644
--- a/lib/librte_mbuf/rte_mbuf_ptype.c
+++ b/lib/librte_mbuf/rte_mbuf_ptype.c
@@ -91,6 +91,7 @@ const char *rte_get_ptype_tunnel_name(uint32_t ptype)
case RTE_PTYPE_TUNNEL_GRENAT: return "TUNNEL_GRENAT";
case RTE_PTYPE_TUNNEL_GTPC: return "TUNNEL_GTPC";
case RTE_PTYPE_TUNNEL_GTPU: return "TUNNEL_GTPU";
+ case RTE_PTYPE_TUNNEL_ESP: return "TUNNEL_ESP";
default: return "TUNNEL_UNKNOWN";
}
}
diff --git a/lib/librte_mbuf/rte_mbuf_ptype.h b/lib/librte_mbuf/rte_mbuf_ptype.h
index 978c4a2..5c62435 100644
--- a/lib/librte_mbuf/rte_mbuf_ptype.h
+++ b/lib/librte_mbuf/rte_mbuf_ptype.h
@@ -415,6 +415,17 @@ extern "C" {
*/
#define RTE_PTYPE_TUNNEL_GTPU 0x00008000
/**
+ * ESP (IP Encapsulating Security Payload) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=51>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=51>
+ */
+#define RTE_PTYPE_TUNNEL_ESP 0x00009000
+/**
* Mask of tunneling packet types.
*/
#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
--
2.9.3
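A short sketch of the intended use of the new flags, with m being an mbuf on a port that has inline security offload enabled:

    /* RX path: check the result of inline security processing. */
    if ((m->ol_flags & (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED)) ==
                    (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED))
            rte_pktmbuf_free(m);    /* e.g. authentication failed, drop it */

    /* TX path: request inline security processing for this packet. */
    m->ol_flags |= PKT_TX_SEC_OFFLOAD;
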
* Re: [dpdk-dev] [PATCH v4 05/12] mbuf: add security crypto flags and mbuf fields
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 05/12] mbuf: add security crypto flags and mbuf fields Akhil Goyal
@ 2017-10-15 12:49 ` Aviad Yehezkel
0 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:49 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> From: Boris Pismenny <borisp@mellanox.com>
>
> Add security crypto flags and update mbuf fields to support
> IPsec crypto offload for transmitted packets, and to indicate
> crypto result for received packets.
>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> ---
> lib/librte_mbuf/rte_mbuf.c | 6 ++++++
> lib/librte_mbuf/rte_mbuf.h | 35 ++++++++++++++++++++++++++++++++---
> lib/librte_mbuf/rte_mbuf_ptype.c | 1 +
> lib/librte_mbuf/rte_mbuf_ptype.h | 11 +++++++++++
> 4 files changed, 50 insertions(+), 3 deletions(-)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 0e18709..6659261 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -324,6 +324,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
> case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
> case PKT_RX_LRO: return "PKT_RX_LRO";
> case PKT_RX_TIMESTAMP: return "PKT_RX_TIMESTAMP";
> + case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
> + case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
> default: return NULL;
> }
> }
> @@ -359,6 +361,8 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
> { PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
> { PKT_RX_LRO, PKT_RX_LRO, NULL },
> { PKT_RX_TIMESTAMP, PKT_RX_TIMESTAMP, NULL },
> + { PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
> + { PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
> };
> const char *name;
> unsigned int i;
> @@ -411,6 +415,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
> case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
> case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
> case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
> + case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
> default: return NULL;
> }
> }
> @@ -444,6 +449,7 @@ rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
> { PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK,
> "PKT_TX_TUNNEL_NONE" },
> { PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
> + { PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
> };
> const char *name;
> unsigned int i;
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index cc38040..5d478da 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -189,11 +189,26 @@ extern "C" {
> */
> #define PKT_RX_TIMESTAMP (1ULL << 17)
>
> +/**
> + * Indicate that security offload processing was applied on the RX packet.
> + */
> +#define PKT_RX_SEC_OFFLOAD (1ULL << 18)
> +
> +/**
> + * Indicate that security offload processing failed on the RX packet.
> + */
> +#define PKT_RX_SEC_OFFLOAD_FAILED (1ULL << 19)
> +
> /* add new RX flags here */
>
> /* add new TX flags here */
>
> /**
> + * Request security offload processing on the TX packet.
> + */
> +#define PKT_TX_SEC_OFFLOAD (1ULL << 43)
> +
> +/**
> * Offload the MACsec. This flag must be set by the application to enable
> * this offload feature for a packet to be transmitted.
> */
> @@ -316,7 +331,8 @@ extern "C" {
> PKT_TX_QINQ_PKT | \
> PKT_TX_VLAN_PKT | \
> PKT_TX_TUNNEL_MASK | \
> - PKT_TX_MACSEC)
> + PKT_TX_MACSEC | \
> + PKT_TX_SEC_OFFLOAD)
>
> #define __RESERVED (1ULL << 61) /**< reserved for future mbuf use */
>
> @@ -456,8 +472,21 @@ struct rte_mbuf {
> uint32_t l3_type:4; /**< (Outer) L3 type. */
> uint32_t l4_type:4; /**< (Outer) L4 type. */
> uint32_t tun_type:4; /**< Tunnel type. */
> - uint32_t inner_l2_type:4; /**< Inner L2 type. */
> - uint32_t inner_l3_type:4; /**< Inner L3 type. */
> + RTE_STD_C11
> + union {
> + uint8_t inner_esp_next_proto;
> + /**< ESP next protocol type, valid if
> + * RTE_PTYPE_TUNNEL_ESP tunnel type is set
> + * on both Tx and Rx.
> + */
> + __extension__
> + struct {
> + uint8_t inner_l2_type:4;
> + /**< Inner L2 type. */
> + uint8_t inner_l3_type:4;
> + /**< Inner L3 type. */
> + };
> + };
> uint32_t inner_l4_type:4; /**< Inner L4 type. */
> };
> };
> diff --git a/lib/librte_mbuf/rte_mbuf_ptype.c b/lib/librte_mbuf/rte_mbuf_ptype.c
> index a450814..a623226 100644
> --- a/lib/librte_mbuf/rte_mbuf_ptype.c
> +++ b/lib/librte_mbuf/rte_mbuf_ptype.c
> @@ -91,6 +91,7 @@ const char *rte_get_ptype_tunnel_name(uint32_t ptype)
> case RTE_PTYPE_TUNNEL_GRENAT: return "TUNNEL_GRENAT";
> case RTE_PTYPE_TUNNEL_GTPC: return "TUNNEL_GTPC";
> case RTE_PTYPE_TUNNEL_GTPU: return "TUNNEL_GTPU";
> + case RTE_PTYPE_TUNNEL_ESP: return "TUNNEL_ESP";
> default: return "TUNNEL_UNKNOWN";
> }
> }
> diff --git a/lib/librte_mbuf/rte_mbuf_ptype.h b/lib/librte_mbuf/rte_mbuf_ptype.h
> index 978c4a2..5c62435 100644
> --- a/lib/librte_mbuf/rte_mbuf_ptype.h
> +++ b/lib/librte_mbuf/rte_mbuf_ptype.h
> @@ -415,6 +415,17 @@ extern "C" {
> */
> #define RTE_PTYPE_TUNNEL_GTPU 0x00008000
> /**
> + * ESP (IP Encapsulating Security Payload) tunneling packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=51>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=51>
> + */
> +#define RTE_PTYPE_TUNNEL_ESP 0x00009000
> +/**
> * Mask of tunneling packet types.
> */
> #define RTE_PTYPE_TUNNEL_MASK 0x0000f000
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
* [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (4 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 05/12] mbuf: add security crypto flags and mbuf fields Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:49 ` Aviad Yehezkel
` (3 more replies)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 07/12] ethdev: add rte flow action for crypto Akhil Goyal
` (7 subsequent siblings)
13 siblings, 4 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Declan Doherty <declan.doherty@intel.com>
rte_flow_action type and ethdev updated to support rte_security
sessions for crypto offload to ethernet device.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
lib/librte_ether/rte_ethdev.c | 11 +++++++++++
lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
lib/librte_ether/rte_ethdev_version.map | 1 +
3 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0b1e928..9520f1e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
return rte_eth_devices[port_id].data->numa_node;
}
+void *
+rte_eth_dev_get_sec_ctx(uint8_t port_id)
+{
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+
+ if (rte_eth_devices[port_id].data->dev_flags & RTE_ETH_DEV_SECURITY)
+ return rte_eth_devices[port_id].data->security_ctx;
+
+ return NULL;
+}
+
uint16_t
rte_eth_dev_count(void)
{
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index aaf02b3..159bb73 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -180,6 +180,8 @@ extern "C" {
#include <rte_dev.h>
#include <rte_devargs.h>
#include <rte_errno.h>
+#include <rte_common.h>
+
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
@@ -379,7 +381,8 @@ struct rte_eth_rxmode {
* This bit is temporary till rxmode bitfield offloads API will
* be deprecated.
*/
- ignore_offload_bitfield : 1;
+ ignore_offload_bitfield : 1,
+ enable_sec : 1; /**< Enable security offload */
};
/**
@@ -707,8 +710,10 @@ struct rte_eth_txmode {
/**< If set, reject sending out tagged pkts */
hw_vlan_reject_untagged : 1,
/**< If set, reject sending out untagged pkts */
- hw_vlan_insert_pvid : 1;
+ hw_vlan_insert_pvid : 1,
/**< If set, enable port based VLAN insertion */
+ enable_sec : 1;
+ /**< Enable security offload */
};
/**
@@ -969,6 +974,7 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
DEV_RX_OFFLOAD_VLAN_EXTEND)
+#define DEV_RX_OFFLOAD_SECURITY 0x00000100
/**
* TX offload capabilities of a device.
@@ -998,6 +1004,7 @@ struct rte_eth_conf {
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
+#define DEV_TX_OFFLOAD_SECURITY 0x00008000
struct rte_pci_device;
@@ -1736,6 +1743,9 @@ struct rte_eth_dev {
enum rte_eth_dev_state state; /**< Flag indicating the port state */
} __rte_cache_aligned;
+void *
+rte_eth_dev_get_sec_ctx(uint8_t port_id);
+
struct rte_eth_dev_sriov {
uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
uint8_t nb_q_per_pool; /**< rx queue number per pool */
@@ -1796,6 +1806,8 @@ struct rte_eth_dev_data {
int numa_node; /**< NUMA node connection */
struct rte_vlan_filter_conf vlan_filter_conf;
/**< VLAN filter configuration. */
+ void *security_ctx;
+ /**< Context for security ops */
};
/** Device supports hotplug detach */
@@ -1806,6 +1818,8 @@ struct rte_eth_dev_data {
#define RTE_ETH_DEV_BONDED_SLAVE 0x0004
/** Device supports device removal interrupt */
#define RTE_ETH_DEV_INTR_RMV 0x0008
+/** Device supports inline security processing */
+#define RTE_ETH_DEV_SECURITY 0x0010
/**
* @internal
diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map
index e27f596..3cc6a64 100644
--- a/lib/librte_ether/rte_ethdev_version.map
+++ b/lib/librte_ether/rte_ethdev_version.map
@@ -194,5 +194,6 @@ DPDK_17.11 {
rte_eth_dev_pool_ops_supported;
rte_eth_dev_reset;
rte_flow_error_set;
+ rte_eth_dev_get_sec_ctx;
} DPDK_17.08;
--
2.9.3
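A minimal sketch of probing and enabling the new capabilities on a port, assuming port_id and a struct rte_eth_conf port_conf that is later passed to rte_eth_dev_configure():

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SECURITY) &&
        (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SECURITY)) {
            port_conf.rxmode.enable_sec = 1;
            port_conf.txmode.enable_sec = 1;

            struct rte_security_ctx *sec_ctx = (struct rte_security_ctx *)
                    rte_eth_dev_get_sec_ctx(port_id);
            /* sec_ctx is then used with rte_security_session_create()
             * to build inline sessions for this port. */
    }
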
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs Akhil Goyal
@ 2017-10-15 12:49 ` Aviad Yehezkel
2017-10-15 13:13 ` Shahaf Shuler
` (2 subsequent siblings)
3 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:49 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> From: Declan Doherty <declan.doherty@intel.com>
>
> rte_flow_action type and ethdev updated to support rte_security
> sessions for crypto offload to ethernet device.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 11 +++++++++++
> lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
> lib/librte_ether/rte_ethdev_version.map | 1 +
> 3 files changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0b1e928..9520f1e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
> return rte_eth_devices[port_id].data->numa_node;
> }
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id)
> +{
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> +
> + if (rte_eth_devices[port_id].data->dev_flags & RTE_ETH_DEV_SECURITY)
> + return rte_eth_devices[port_id].data->security_ctx;
> +
> + return NULL;
> +}
> +
> uint16_t
> rte_eth_dev_count(void)
> {
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index aaf02b3..159bb73 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -180,6 +180,8 @@ extern "C" {
> #include <rte_dev.h>
> #include <rte_devargs.h>
> #include <rte_errno.h>
> +#include <rte_common.h>
> +
> #include "rte_ether.h"
> #include "rte_eth_ctrl.h"
> #include "rte_dev_info.h"
> @@ -379,7 +381,8 @@ struct rte_eth_rxmode {
> * This bit is temporary till rxmode bitfield offloads API will
> * be deprecated.
> */
> - ignore_offload_bitfield : 1;
> + ignore_offload_bitfield : 1,
> + enable_sec : 1; /**< Enable security offload */
> };
>
> /**
> @@ -707,8 +710,10 @@ struct rte_eth_txmode {
> /**< If set, reject sending out tagged pkts */
> hw_vlan_reject_untagged : 1,
> /**< If set, reject sending out untagged pkts */
> - hw_vlan_insert_pvid : 1;
> + hw_vlan_insert_pvid : 1,
> /**< If set, enable port based VLAN insertion */
> + enable_sec : 1;
> + /**< Enable security offload */
> };
>
> /**
> @@ -969,6 +974,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
> DEV_RX_OFFLOAD_VLAN_FILTER | \
> DEV_RX_OFFLOAD_VLAN_EXTEND)
> +#define DEV_RX_OFFLOAD_SECURITY 0x00000100
>
> /**
> * TX offload capabilities of a device.
> @@ -998,6 +1004,7 @@ struct rte_eth_conf {
> * When set application must guarantee that per-queue all mbufs comes from
> * the same mempool and has refcnt = 1.
> */
> +#define DEV_TX_OFFLOAD_SECURITY 0x00008000
>
> struct rte_pci_device;
>
> @@ -1736,6 +1743,9 @@ struct rte_eth_dev {
> enum rte_eth_dev_state state; /**< Flag indicating the port state */
> } __rte_cache_aligned;
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id);
> +
> struct rte_eth_dev_sriov {
> uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
> uint8_t nb_q_per_pool; /**< rx queue number per pool */
> @@ -1796,6 +1806,8 @@ struct rte_eth_dev_data {
> int numa_node; /**< NUMA node connection */
> struct rte_vlan_filter_conf vlan_filter_conf;
> /**< VLAN filter configuration. */
> + void *security_ctx;
> + /**< Context for security ops */
> };
>
> /** Device supports hotplug detach */
> @@ -1806,6 +1818,8 @@ struct rte_eth_dev_data {
> #define RTE_ETH_DEV_BONDED_SLAVE 0x0004
> /** Device supports device removal interrupt */
> #define RTE_ETH_DEV_INTR_RMV 0x0008
> +/** Device supports inline security processing */
> +#define RTE_ETH_DEV_SECURITY 0x0010
>
> /**
> * @internal
> diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map
> index e27f596..3cc6a64 100644
> --- a/lib/librte_ether/rte_ethdev_version.map
> +++ b/lib/librte_ether/rte_ethdev_version.map
> @@ -194,5 +194,6 @@ DPDK_17.11 {
> rte_eth_dev_pool_ops_supported;
> rte_eth_dev_reset;
> rte_flow_error_set;
> + rte_eth_dev_get_sec_ctx;
>
> } DPDK_17.08;
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs Akhil Goyal
2017-10-15 12:49 ` Aviad Yehezkel
@ 2017-10-15 13:13 ` Shahaf Shuler
2017-10-16 8:46 ` Nicolau, Radu
2017-10-19 9:23 ` Ananyev, Konstantin
2017-10-20 10:58 ` Thomas Monjalon
3 siblings, 1 reply; 195+ messages in thread
From: Shahaf Shuler @ 2017-10-15 13:13 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, Boris Pismenny, Aviad Yehezkel, Thomas Monjalon,
sandeep.malik, jerin.jacob, john.mcnamara, konstantin.ananyev,
olivier.matz
Hi Akhil,
Sunday, October 15, 2017 1:17 AM, Akhil Goyal:
> From: Declan Doherty <declan.doherty@intel.com>
>
> rte_flow_action type and ethdev updated to support rte_security sessions
> for crypto offload to ethernet device.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 11 +++++++++++
> lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
> lib/librte_ether/rte_ethdev_version.map | 1 +
> 3 files changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0b1e928..9520f1e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
> return rte_eth_devices[port_id].data->numa_node;
> }
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id) {
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> +
> + if (rte_eth_devices[port_id].data->dev_flags &
> RTE_ETH_DEV_SECURITY)
> + return rte_eth_devices[port_id].data->security_ctx;
> +
> + return NULL;
> +}
> +
> uint16_t
> rte_eth_dev_count(void)
> {
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index aaf02b3..159bb73 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -180,6 +180,8 @@ extern "C" {
> #include <rte_dev.h>
> #include <rte_devargs.h>
> #include <rte_errno.h>
> +#include <rte_common.h>
> +
> #include "rte_ether.h"
> #include "rte_eth_ctrl.h"
> #include "rte_dev_info.h"
> @@ -379,7 +381,8 @@ struct rte_eth_rxmode {
> * This bit is temporary till rxmode bitfield offloads API will
> * be deprecated.
> */
> - ignore_offload_bitfield : 1;
> + ignore_offload_bitfield : 1,
> + enable_sec : 1; /**< Enable security offload */
I suggest to keep the ignore_offload_bitfield last.
Also you should update the convert function. See:
rte_eth_convert_rx_offload_bitfield
rte_eth_convert_rx_offloads
> };
>
> /**
> @@ -707,8 +710,10 @@ struct rte_eth_txmode {
> /**< If set, reject sending out tagged pkts */
> hw_vlan_reject_untagged : 1,
> /**< If set, reject sending out untagged pkts */
> - hw_vlan_insert_pvid : 1;
> + hw_vlan_insert_pvid : 1,
> /**< If set, enable port based VLAN insertion */
> + enable_sec : 1;
> + /**< Enable security offload */
I am copying the comment and answer from v2 on the Tx offload. It seems we agreed, so why is it not addressed?
From: Radu Nicolau radu.nicolau at intel.com
> Already comment on it in the previous version [1].
> I don't think there is a justification to introduce new approach to set Tx offloads given there is already patch set which provides such new API [2].
> I think this patch should be on top of it.
I agree with you, that is if the new offload API will be merged we will
also change this one. But until then it makes testing and developing
more difficult.
> };
>
> /**
> @@ -969,6 +974,7 @@ struct rte_eth_conf { #define
> DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
> DEV_RX_OFFLOAD_VLAN_FILTER | \
> DEV_RX_OFFLOAD_VLAN_EXTEND)
> +#define DEV_RX_OFFLOAD_SECURITY 0x00000100
>
> /**
> * TX offload capabilities of a device.
> @@ -998,6 +1004,7 @@ struct rte_eth_conf {
> * When set application must guarantee that per-queue all mbufs comes
> from
> * the same mempool and has refcnt = 1.
> */
> +#define DEV_TX_OFFLOAD_SECURITY 0x00008000
>
> struct rte_pci_device;
>
> @@ -1736,6 +1743,9 @@ struct rte_eth_dev {
> enum rte_eth_dev_state state; /**< Flag indicating the port state */
> } __rte_cache_aligned;
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id);
> +
> struct rte_eth_dev_sriov {
> uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
> uint8_t nb_q_per_pool; /**< rx queue number per pool */
> @@ -1796,6 +1806,8 @@ struct rte_eth_dev_data {
> int numa_node; /**< NUMA node connection */
> struct rte_vlan_filter_conf vlan_filter_conf;
> /**< VLAN filter configuration. */
> + void *security_ctx;
> + /**< Context for security ops */
> };
>
> /** Device supports hotplug detach */
> @@ -1806,6 +1818,8 @@ struct rte_eth_dev_data { #define
> RTE_ETH_DEV_BONDED_SLAVE 0x0004
> /** Device supports device removal interrupt */
> #define RTE_ETH_DEV_INTR_RMV 0x0008
> +/** Device supports inline security processing */
> +#define RTE_ETH_DEV_SECURITY 0x0010
I have to insist on this one. I don't understand which extra functionality it provides compared to DEV_RX_OFFLOAD_SECURITY or DEV_TX_OFFLOAD_SECURITY.
The answer from the previous version was to "allow to advertise that a device has security features without the need to check exactly which ones are they".
I think this is exactly what DEV_RX_OFFLOAD_SECURITY and DEV_TX_OFFLOAD_SECURITY mean. Those flags do not provide the full capabilities of the different security offloads supported by the device (those should be queried through the rte_security APIs).
>
> /**
> * @internal
> diff --git a/lib/librte_ether/rte_ethdev_version.map
> b/lib/librte_ether/rte_ethdev_version.map
> index e27f596..3cc6a64 100644
> --- a/lib/librte_ether/rte_ethdev_version.map
> +++ b/lib/librte_ether/rte_ethdev_version.map
> @@ -194,5 +194,6 @@ DPDK_17.11 {
> rte_eth_dev_pool_ops_supported;
> rte_eth_dev_reset;
> rte_flow_error_set;
> + rte_eth_dev_get_sec_ctx;
>
> } DPDK_17.08;
> --
> 2.9.3
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-15 13:13 ` Shahaf Shuler
@ 2017-10-16 8:46 ` Nicolau, Radu
0 siblings, 0 replies; 195+ messages in thread
From: Nicolau, Radu @ 2017-10-16 8:46 UTC (permalink / raw)
To: Shahaf Shuler, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal,
Boris Pismenny, Aviad Yehezkel, Thomas Monjalon, sandeep.malik,
jerin.jacob, Mcnamara, John, Ananyev, Konstantin, olivier.matz
Hi Shahaf,
I will address the issues asap; they didn't make it into v4 because of timing reasons.
Regards,
Radu
> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> Sent: Sunday, October 15, 2017 2:13 PM
> To: Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Nicolau,
> Radu <radu.nicolau@intel.com>; Boris Pismenny <borisp@mellanox.com>;
> Aviad Yehezkel <aviadye@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; sandeep.malik@nxp.com;
> jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; olivier.matz@6wind.com
> Subject: RE: [PATCH v4 06/12] ethdev: support security APIs
>
> Hi Akhil,
>
> Sunday, October 15, 2017 1:17 AM, Akhil Goyal:
> > From: Declan Doherty <declan.doherty@intel.com>
> >
> > rte_flow_action type and ethdev updated to support rte_security
> > sessions for crypto offload to ethernet device.
> >
> > Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> > Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > ---
> > lib/librte_ether/rte_ethdev.c | 11 +++++++++++
> > lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
> > lib/librte_ether/rte_ethdev_version.map | 1 +
> > 3 files changed, 28 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_ether/rte_ethdev.c
> > b/lib/librte_ether/rte_ethdev.c index 0b1e928..9520f1e 100644
> > --- a/lib/librte_ether/rte_ethdev.c
> > +++ b/lib/librte_ether/rte_ethdev.c
> > @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
> > return rte_eth_devices[port_id].data->numa_node;
> > }
> >
> > +void *
> > +rte_eth_dev_get_sec_ctx(uint8_t port_id) {
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> > +
> > + if (rte_eth_devices[port_id].data->dev_flags &
> > RTE_ETH_DEV_SECURITY)
> > + return rte_eth_devices[port_id].data->security_ctx;
> > +
> > + return NULL;
> > +}
> > +
> > uint16_t
> > rte_eth_dev_count(void)
> > {
> > diff --git a/lib/librte_ether/rte_ethdev.h
> > b/lib/librte_ether/rte_ethdev.h index aaf02b3..159bb73 100644
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -180,6 +180,8 @@ extern "C" {
> > #include <rte_dev.h>
> > #include <rte_devargs.h>
> > #include <rte_errno.h>
> > +#include <rte_common.h>
> > +
> > #include "rte_ether.h"
> > #include "rte_eth_ctrl.h"
> > #include "rte_dev_info.h"
> > @@ -379,7 +381,8 @@ struct rte_eth_rxmode {
> > * This bit is temporary till rxmode bitfield offloads API will
> > * be deprecated.
> > */
> > - ignore_offload_bitfield : 1;
> > + ignore_offload_bitfield : 1,
> > + enable_sec : 1; /**< Enable security offload */
>
> I suggest to keep the ignore_offload_bitfield last.
>
> Also you should update the convert function. See:
> rte_eth_convert_rx_offload_bitfield
> rte_eth_convert_rx_offloads
>
> > };
> >
> > /**
> > @@ -707,8 +710,10 @@ struct rte_eth_txmode {
> > /**< If set, reject sending out tagged pkts */
> > hw_vlan_reject_untagged : 1,
> > /**< If set, reject sending out untagged pkts */
> > - hw_vlan_insert_pvid : 1;
> > + hw_vlan_insert_pvid : 1,
> > /**< If set, enable port based VLAN insertion */
> > + enable_sec : 1;
> > + /**< Enable security offload */
>
> I am copying the comment and answer from v2 on the Tx offload. It seems
> we agreed, so why is it not addressed?
>
> From: Radu Nicolau radu.nicolau at intel.com
> > Already commented on it in the previous version [1].
> > I don't think there is a justification to introduce a new approach to set Tx
> > offloads given there is already a patch set which provides such a new API [2].
> > I think this patch should be on top of it.
> I agree with you; if the new offload API is merged we will also
> change this one. But until then it makes testing and development more
> difficult.
>
>
> > };
> >
> > /**
> > @@ -969,6 +974,7 @@ struct rte_eth_conf { #define
> DEV_RX_OFFLOAD_VLAN
> > (DEV_RX_OFFLOAD_VLAN_STRIP | \
> > DEV_RX_OFFLOAD_VLAN_FILTER | \
> > DEV_RX_OFFLOAD_VLAN_EXTEND)
> > +#define DEV_RX_OFFLOAD_SECURITY 0x00000100
> >
> > /**
> > * TX offload capabilities of a device.
> > @@ -998,6 +1004,7 @@ struct rte_eth_conf {
> > * When set application must guarantee that per-queue all mbufs comes
> > from
> > * the same mempool and has refcnt = 1.
> > */
> > +#define DEV_TX_OFFLOAD_SECURITY 0x00008000
> >
> > struct rte_pci_device;
> >
> > @@ -1736,6 +1743,9 @@ struct rte_eth_dev {
> > enum rte_eth_dev_state state; /**< Flag indicating the port state */
> > } __rte_cache_aligned;
> >
> > +void *
> > +rte_eth_dev_get_sec_ctx(uint8_t port_id);
> > +
> > struct rte_eth_dev_sriov {
> > uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
> > uint8_t nb_q_per_pool; /**< rx queue number per pool */
> > @@ -1796,6 +1806,8 @@ struct rte_eth_dev_data {
> > int numa_node; /**< NUMA node connection */
> > struct rte_vlan_filter_conf vlan_filter_conf;
> > /**< VLAN filter configuration. */
> > + void *security_ctx;
> > + /**< Context for security ops */
> > };
> >
> > /** Device supports hotplug detach */ @@ -1806,6 +1818,8 @@ struct
> > rte_eth_dev_data { #define RTE_ETH_DEV_BONDED_SLAVE 0x0004
> > /** Device supports device removal interrupt */
> > #define RTE_ETH_DEV_INTR_RMV 0x0008
> > +/** Device supports inline security processing */
> > +#define RTE_ETH_DEV_SECURITY 0x0010
>
> I have to insist about this one. I don't understand which extra functionality it
> provides in comparison to DEV_RX_OFFLOAD_SECURITY or
> DEV_TX_OFFLOAD_SECURITY.
> The answer from the previous version was to "allow to advertise that a device has
> security features without the need to check exactly which ones are they".
> I think this is exactly what DEV_RX_OFFLOAD_SECURITY and
> DEV_TX_OFFLOAD_SECURITY mean. Those flags do not provide the full
> capabilities of the different security offloads supported by the device (those
> should be queried through rte_security APIs).
>
> >
> > /**
> > * @internal
> > diff --git a/lib/librte_ether/rte_ethdev_version.map
> > b/lib/librte_ether/rte_ethdev_version.map
> > index e27f596..3cc6a64 100644
> > --- a/lib/librte_ether/rte_ethdev_version.map
> > +++ b/lib/librte_ether/rte_ethdev_version.map
> > @@ -194,5 +194,6 @@ DPDK_17.11 {
> > rte_eth_dev_pool_ops_supported;
> > rte_eth_dev_reset;
> > rte_flow_error_set;
> > + rte_eth_dev_get_sec_ctx;
> >
> > } DPDK_17.08;
> > --
> > 2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs Akhil Goyal
2017-10-15 12:49 ` Aviad Yehezkel
2017-10-15 13:13 ` Shahaf Shuler
@ 2017-10-19 9:23 ` Ananyev, Konstantin
2017-10-21 16:00 ` Akhil Goyal
2017-10-20 10:58 ` Thomas Monjalon
3 siblings, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 9:23 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
Hi guys,
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Saturday, October 14, 2017 11:17 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> Nicolau, Radu <radu.nicolau@intel.com>; borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara, John <john.mcnamara@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> Subject: [PATCH v4 06/12] ethdev: support security APIs
>
> From: Declan Doherty <declan.doherty@intel.com>
>
> rte_flow_action type and ethdev updated to support rte_security
> sessions for crypto offload to ethernet device.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 11 +++++++++++
> lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
> lib/librte_ether/rte_ethdev_version.map | 1 +
> 3 files changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0b1e928..9520f1e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
> return rte_eth_devices[port_id].data->numa_node;
> }
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id)
> +{
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> +
> + if (rte_eth_devices[port_id].data->dev_flags & RTE_ETH_DEV_SECURITY)
As you don't currently support MP, it is probably worth adding a check for the
process type somewhere (here or at the PMD layer).
Something like:
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return NULL;
or so.
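For clarity, the guarded accessor could look roughly like this (a sketch only, reusing the v4 function body quoted above; whether the check belongs here or in the PMD is exactly the open point):
void *
rte_eth_dev_get_sec_ctx(uint8_t port_id)
{
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
	/* sketch: refuse to hand out the context from a secondary process
	 * while multi-process is not supported */
	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
		return NULL;
	if (rte_eth_devices[port_id].data->dev_flags & RTE_ETH_DEV_SECURITY)
		return rte_eth_devices[port_id].data->security_ctx;
	return NULL;
}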
Konstantin
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-19 9:23 ` Ananyev, Konstantin
@ 2017-10-21 16:00 ` Akhil Goyal
2017-10-23 9:56 ` Ananyev, Konstantin
0 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 16:00 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
Hi Konstantin,
On 10/19/2017 2:53 PM, Ananyev, Konstantin wrote:
> Hi guys,
>
>> -----Original Message-----
>> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
>> Sent: Saturday, October 14, 2017 11:17 PM
>> To: dev@dpdk.org
>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
>> Nicolau, Radu <radu.nicolau@intel.com>; borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
>> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara, John <john.mcnamara@intel.com>; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
>> Subject: [PATCH v4 06/12] ethdev: support security APIs
>>
>> From: Declan Doherty <declan.doherty@intel.com>
>>
>> rte_flow_action type and ethdev updated to support rte_security
>> sessions for crypto offload to ethernet device.
>>
>> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
>> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> ---
>> lib/librte_ether/rte_ethdev.c | 11 +++++++++++
>> lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
>> lib/librte_ether/rte_ethdev_version.map | 1 +
>> 3 files changed, 28 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>> index 0b1e928..9520f1e 100644
>> --- a/lib/librte_ether/rte_ethdev.c
>> +++ b/lib/librte_ether/rte_ethdev.c
>> @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
>> return rte_eth_devices[port_id].data->numa_node;
>> }
>>
>> +void *
>> +rte_eth_dev_get_sec_ctx(uint8_t port_id)
>> +{
>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
>> +
>> + if (rte_eth_devices[port_id].data->dev_flags & RTE_ETH_DEV_SECURITY)
>
>
> As you don't currently support MP, it is probably worth to add somewhere
> (here or at PMD layer) check for process type.
> Something like:
> if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> return NULL;
> or so.
> Konstantin
>
>
As per my understanding, the MP issue is resolved in v4.
So I believe this check is not required anymore. Do you see any issue in MP?
-Akhil
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-21 16:00 ` Akhil Goyal
@ 2017-10-23 9:56 ` Ananyev, Konstantin
2017-10-23 13:08 ` Nicolau, Radu
0 siblings, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-23 9:56 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
Hi Akhil,
>
> Hi Konstantin,
>
> On 10/19/2017 2:53 PM, Ananyev, Konstantin wrote:
> > Hi guys,
> >
> >> -----Original Message-----
> >> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> >> Sent: Saturday, October 14, 2017 11:17 PM
> >> To: dev@dpdk.org
> >> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>;
> hemant.agrawal@nxp.com;
> >> Nicolau, Radu <radu.nicolau@intel.com>; borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> >> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara, John <john.mcnamara@intel.com>; Ananyev, Konstantin
> >> <konstantin.ananyev@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> >> Subject: [PATCH v4 06/12] ethdev: support security APIs
> >>
> >> From: Declan Doherty <declan.doherty@intel.com>
> >>
> >> rte_flow_action type and ethdev updated to support rte_security
> >> sessions for crypto offload to ethernet device.
> >>
> >> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> >> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >> ---
> >> lib/librte_ether/rte_ethdev.c | 11 +++++++++++
> >> lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
> >> lib/librte_ether/rte_ethdev_version.map | 1 +
> >> 3 files changed, 28 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> >> index 0b1e928..9520f1e 100644
> >> --- a/lib/librte_ether/rte_ethdev.c
> >> +++ b/lib/librte_ether/rte_ethdev.c
> >> @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
> >> return rte_eth_devices[port_id].data->numa_node;
> >> }
> >>
> >> +void *
> >> +rte_eth_dev_get_sec_ctx(uint8_t port_id)
> >> +{
> >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> >> +
> >> + if (rte_eth_devices[port_id].data->dev_flags & RTE_ETH_DEV_SECURITY)
> >
> >
> > As you don't currently support MP, it is probably worth to add somewhere
> > (here or at PMD layer) check for process type.
> > Something like:
> > if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> > return NULL;
> > or so.
> > Konstantin
> >
> >
> The MP issue is resolved as per my understanding in the v4.
As far as I can see from v4, MP is still not supported:
1. security_ctx is placed into rte_eth_dev_data (which is shared between multiple processes)
while it still contains a pointer to the ops functions (which are process-local).
To support MP you'll probably need to split security_ctx into 2 parts:
private to the process (ops) and shared between processes (actual data),
or come up with some other (smarter) way.
2. At least ixgbe_dev_init() right now always blindly allocates a new
security_ctx and overwrites eth_dev->data->security_ctx with this new value.
I do remember that you didn't plan to support MP for 17.11 anyway.
So I suggest for now just making sure that a secondary process doesn't touch
that shared security_ctx in any way.
The easiest thing would probably be to move it from the shared to the private part of ethdev:
i.e. move void *security_ctx; from struct rte_eth_dev_data to struct rte_eth_dev
(you'll probably have to do it anyway later, because of #1)
and make sure it is initialized only for the primary process.
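To make the split in point 1 concrete, a rough sketch (illustrative names only, not an actual proposal):
/* Sketch of the split: keep the function pointers in per-process memory
 * and share only plain data between processes. */
struct rte_security_ctx_shared {		/* would live in rte_eth_dev_data */
	uint16_t sess_cnt;			/* session count and other plain data */
};
struct rte_security_ctx {			/* would live in struct rte_eth_dev */
	void *device;				/* backing ethdev/cryptodev */
	const struct rte_security_ops *ops;	/* process-local function pointers */
	struct rte_security_ctx_shared *shared;	/* shared part above */
};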
Konstantin
> So I believe this check is not required anymore. Do you see any issue in MP?
>
> -Akhil
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-23 9:56 ` Ananyev, Konstantin
@ 2017-10-23 13:08 ` Nicolau, Radu
0 siblings, 0 replies; 195+ messages in thread
From: Nicolau, Radu @ 2017-10-23 13:08 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, October 23, 2017 10:57 AM
> To: Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Nicolau,
> Radu <radu.nicolau@intel.com>; borisp@mellanox.com;
> aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com;
> jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; shahafs@mellanox.com;
> olivier.matz@6wind.com
> Subject: RE: [PATCH v4 06/12] ethdev: support security APIs
>
>
> Hi Akhil,
>
> >
> > Hi Konstantin,
> >
> > On 10/19/2017 2:53 PM, Ananyev, Konstantin wrote:
> > > Hi guys,
> > >
> > >> -----Original Message-----
> > >> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> > >> Sent: Saturday, October 14, 2017 11:17 PM
> > >> To: dev@dpdk.org
> > >> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch,
> > >> Pablo <pablo.de.lara.guarch@intel.com>;
> > hemant.agrawal@nxp.com;
> > >> Nicolau, Radu <radu.nicolau@intel.com>; borisp@mellanox.com;
> > >> aviadye@mellanox.com; thomas@monjalon.net;
> sandeep.malik@nxp.com;
> > >> jerin.jacob@caviumnetworks.com; Mcnamara, John
> > >> <john.mcnamara@intel.com>; Ananyev, Konstantin
> > >> <konstantin.ananyev@intel.com>; shahafs@mellanox.com;
> > >> olivier.matz@6wind.com
> > >> Subject: [PATCH v4 06/12] ethdev: support security APIs
> > >>
> > >> From: Declan Doherty <declan.doherty@intel.com>
> > >>
> > >> rte_flow_action type and ethdev updated to support rte_security
> > >> sessions for crypto offload to ethernet device.
> > >>
> > >> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> > >> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> > >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > >> ---
> > >> lib/librte_ether/rte_ethdev.c | 11 +++++++++++
> > >> lib/librte_ether/rte_ethdev.h | 18 ++++++++++++++++--
> > >> lib/librte_ether/rte_ethdev_version.map | 1 +
> > >> 3 files changed, 28 insertions(+), 2 deletions(-)
> > >>
> > >> diff --git a/lib/librte_ether/rte_ethdev.c
> > >> b/lib/librte_ether/rte_ethdev.c index 0b1e928..9520f1e 100644
> > >> --- a/lib/librte_ether/rte_ethdev.c
> > >> +++ b/lib/librte_ether/rte_ethdev.c
> > >> @@ -301,6 +301,17 @@ rte_eth_dev_socket_id(uint16_t port_id)
> > >> return rte_eth_devices[port_id].data->numa_node;
> > >> }
> > >>
> > >> +void *
> > >> +rte_eth_dev_get_sec_ctx(uint8_t port_id) {
> > >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> > >> +
> > >> + if (rte_eth_devices[port_id].data->dev_flags &
> > >> +RTE_ETH_DEV_SECURITY)
> > >
> > >
> > > As you don't currently support MP, it is probably worth to add
> > > somewhere (here or at PMD layer) check for process type.
> > > Something like:
> > > if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> > > return NULL;
> > > or so.
> > > Konstantin
> > >
> > >
> > The MP issue is resolved as per my understanding in the v4.
>
> As I can see from v4 - MP is still not supported:
>
> 1. security_ctx is placed into rte_eth_dev_data (which is shared between
> multiple processes) while it still contains a pointer to particular ops functions.
> To support MP you'll probably need to split security_ctx into 2 parts:
> private to process (ops) and shared between processes (actual data), or
> come up with some other (smarter) way.
> 2. At least ixgbe_dev_init() right now always blindly allocates new
> security_ctx and overwrites eth_dev->data->security_ctx with this new
> value.
>
> I do remember that you didn't plan to support MP for 17.11 anyway.
> So I suggest for now just to make sure that secondary process wouldn't
> touch that shared security_ctx in any way.
> The easiest thing would probably be to move it from shared to private part of
> ethdev:
> i.e. move void *security_ctx; from struct rte_eth_dev_data to struct
> rte_eth_dev (you'll probably have to do it anyway later, because of #1) and
> make sure it is initialized only for primary process.
> Konstantin
>
> > SO I believe this check is not required anymore. Do you see any issue in
> MP.
> >
> > -Akhil
I moved the security_ctx from dev->data to dev as Konstantin suggested, and only initialize it for the primary process. This means that secondary processes will not be supported at the moment, but we will follow up in the next release.
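For reference, the init path now looks roughly like this (a sketch based on the v4 ixgbe hunk quoted above, with the context moved into struct rte_eth_dev; the final field placement may differ):
/* in eth_ixgbe_dev_init(): allocate the security context only in the
 * primary process and keep it in the per-process part of the device */
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
	struct rte_security_ctx *security_instance;
	security_instance = rte_malloc("rte_security_instances_ops",
				       sizeof(struct rte_security_ctx), 0);
	if (security_instance == NULL)
		return -ENOMEM;
	security_instance->state = RTE_SECURITY_INSTANCE_VALID;
	security_instance->device = (void *)eth_dev;
	security_instance->ops = &ixgbe_security_ops;
	security_instance->sess_cnt = 0;
	eth_dev->security_ctx = security_instance;	/* was eth_dev->data->security_ctx */
}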
Regards,
Radu
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs Akhil Goyal
` (2 preceding siblings ...)
2017-10-19 9:23 ` Ananyev, Konstantin
@ 2017-10-20 10:58 ` Thomas Monjalon
2017-10-21 19:50 ` Akhil Goyal
3 siblings, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 10:58 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
15/10/2017 00:17, Akhil Goyal:
> --- a/lib/librte_ether/rte_ethdev_version.map
> +++ b/lib/librte_ether/rte_ethdev_version.map
> @@ -194,5 +194,6 @@ DPDK_17.11 {
> rte_eth_dev_pool_ops_supported;
> rte_eth_dev_reset;
> rte_flow_error_set;
> + rte_eth_dev_get_sec_ctx;
>
> } DPDK_17.08;
Please keep alphabetical order.
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs
2017-10-20 10:58 ` Thomas Monjalon
@ 2017-10-21 19:50 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:50 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
On 10/20/2017 4:28 PM, Thomas Monjalon wrote:
> 15/10/2017 00:17, Akhil Goyal:
>> --- a/lib/librte_ether/rte_ethdev_version.map
>> +++ b/lib/librte_ether/rte_ethdev_version.map
>> @@ -194,5 +194,6 @@ DPDK_17.11 {
>> rte_eth_dev_pool_ops_supported;
>> rte_eth_dev_reset;
>> rte_flow_error_set;
>> + rte_eth_dev_get_sec_ctx;
>>
>> } DPDK_17.08;
>
> Please keep alphabetical order.
>
>
ok. Will fix the same for rte_cryptodev_version.map also.
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 07/12] ethdev: add rte flow action for crypto
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (5 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 06/12] ethdev: support security APIs Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:49 ` Aviad Yehezkel
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions Akhil Goyal
` (6 subsequent siblings)
13 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
The crypto action is specified by an application to request
crypto offload for a flow.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
lib/librte_ether/rte_flow.h | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 7c89089..39f66c2 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -993,6 +993,13 @@ enum rte_flow_action_type {
* See struct rte_flow_action_vf.
*/
RTE_FLOW_ACTION_TYPE_VF,
+ /**
+ * Redirects packets to security engine of current device for security
+ * processing as specified by security session.
+ *
+ * See struct rte_flow_action_security.
+ */
+ RTE_FLOW_ACTION_TYPE_SECURITY
};
/**
@@ -1086,6 +1093,37 @@ struct rte_flow_action_vf {
};
/**
+ * RTE_FLOW_ACTION_TYPE_SECURITY
+ *
+ * Perform the security action on flows matched by the pattern items
+ * according to the configuration of the security session.
+ *
+ * This action modifies the payload of matched flows. For INLINE_CRYPTO, the
+ * security protocol headers and IV are fully provided by the application as
+ * specified in the flow pattern. The payload of matching packets is
+ * encrypted on egress, and decrypted and authenticated on ingress.
+ * For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
+ * providing full encapsulation and decapsulation of packets in security
+ * protocols. The flow pattern specifies both the outer security header fields
+ * and the inner packet fields. The security session specified in the action
+ * must match the pattern parameters.
+ *
+ * The security session specified in the action must be created on the same
+ * port as the flow action that is being specified.
+ *
+ * The ingress/egress flow attribute should match that specified in the
+ * security session if the security session supports the definition of the
+ * direction.
+ *
+ * Multiple flows can be configured to use the same security session.
+ *
+ * Non-terminating by default.
+ */
+struct rte_flow_action_security {
+ void *security_session; /**< Pointer to security session structure. */
+};
+
+/**
* Definition of a single action.
*
* A list of actions is terminated by a END action.
--
2.9.3
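As a usage illustration, a minimal sketch of attaching a security session to matching traffic with the new action (it assumes the ESP flow item documented in the next patch; sess, sa_spi and port_id are placeholders provided elsewhere, e.g. sess from rte_security_session_create()):
/* needs string.h, rte_flow.h, rte_security.h, rte_byteorder.h */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item pattern[4];
struct rte_flow_action actions[2];
struct rte_flow_item_esp esp_spec = {
	.hdr = { .spi = rte_cpu_to_be_32(sa_spi) },	/* SPI of the SA */
};
struct rte_flow_action_security sec_action = {
	.security_session = sess,	/* from rte_security_session_create() */
};
struct rte_flow_error err;
struct rte_flow *flow;
memset(pattern, 0, sizeof(pattern));
pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
pattern[2].type = RTE_FLOW_ITEM_TYPE_ESP;
pattern[2].spec = &esp_spec;		/* default mask matches SPI only */
pattern[3].type = RTE_FLOW_ITEM_TYPE_END;
actions[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
actions[0].conf = &sec_action;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
flow = rte_flow_create(port_id, &attr, pattern, actions, &err);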
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 07/12] ethdev: add rte flow action for crypto
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 07/12] ethdev: add rte flow action for crypto Akhil Goyal
@ 2017-10-15 12:49 ` Aviad Yehezkel
0 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:49 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> From: Boris Pismenny <borisp@mellanox.com>
>
> The crypto action is specified by an application to request
> crypto offload for a flow.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> ---
> lib/librte_ether/rte_flow.h | 38 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 38 insertions(+)
>
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index 7c89089..39f66c2 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -993,6 +993,13 @@ enum rte_flow_action_type {
> * See struct rte_flow_action_vf.
> */
> RTE_FLOW_ACTION_TYPE_VF,
> + /**
> + * Redirects packets to security engine of current device for security
> + * processing as specified by security session.
> + *
> + * See struct rte_flow_action_security.
> + */
> + RTE_FLOW_ACTION_TYPE_SECURITY
> };
>
> /**
> @@ -1086,6 +1093,37 @@ struct rte_flow_action_vf {
> };
>
> /**
> + * RTE_FLOW_ACTION_TYPE_SECURITY
> + *
> + * Perform the security action on flows matched by the pattern items
> + * according to the configuration of the security session.
> + *
> + * This action modifies the payload of matched flows. For INLINE_CRYPTO, the
> + * security protocol headers and IV are fully provided by the application as
> + * specified in the flow pattern. The payload of matching packets is
> + * encrypted on egress, and decrypted and authenticated on ingress.
> + * For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
> + * providing full encapsulation and decapsulation of packets in security
> + * protocols. The flow pattern specifies both the outer security header fields
> + * and the inner packet fields. The security session specified in the action
> + * must match the pattern parameters.
> + *
> + * The security session specified in the action must be created on the same
> + * port as the flow action that is being specified.
> + *
> + * The ingress/egress flow attribute should match that specified in the
> + * security session if the security session supports the definition of the
> + * direction.
> + *
> + * Multiple flows can be configured to use the same security session.
> + *
> + * Non-terminating by default.
> + */
> +struct rte_flow_action_security {
> + void *security_session; /**< Pointer to security session structure. */
> +};
> +
> +/**
> * Definition of a single action.
> *
> * A list of actions is terminated by a END action.
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (6 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 07/12] ethdev: add rte flow action for crypto Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:50 ` Aviad Yehezkel
` (2 more replies)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system Akhil Goyal
` (5 subsequent siblings)
13 siblings, 3 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 84 +++++++++++++++++++++++++++++++++++++-
1 file changed, 82 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 13e3dbe..ac1adf9 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -187,7 +187,7 @@ Pattern item
Pattern items fall in two categories:
- Matching protocol headers and packet data (ANY, RAW, ETH, VLAN, IPV4,
- IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE and so on), usually
+ IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE, ESP and so on), usually
associated with a specification structure.
- Matching meta-data or affecting pattern processing (END, VOID, INVERT, PF,
@@ -972,6 +972,14 @@ flow rules.
- ``teid``: tunnel endpoint identifier.
- Default ``mask`` matches teid only.
+Item: ``ESP``
+^^^^^^^^^^^^^
+
+Matches an ESP header.
+
+- ``hdr``: ESP header definition (``rte_esp.h``).
+- Default ``mask`` matches SPI only.
+
Actions
~~~~~~~
@@ -989,7 +997,7 @@ They fall in three categories:
additional processing by subsequent flow rules.
- Other non-terminating meta actions that do not affect the fate of packets
- (END, VOID, MARK, FLAG, COUNT).
+ (END, VOID, MARK, FLAG, COUNT, SECURITY).
When several actions are combined in a flow rule, they should all have
different types (e.g. dropping a packet twice is not possible).
@@ -1371,6 +1379,78 @@ rule or if packets are not addressed to a VF in the first place.
| ``vf`` | VF ID to redirect packets to |
+--------------+--------------------------------+
+Action: ``SECURITY``
+^^^^^^^^^^^^^^^^^^^^
+
+Perform the security action on flows matched by the pattern items
+according to the configuration of the security session.
+
+This action modifies the payload of matched flows. For INLINE_CRYPTO, the
+security protocol headers and IV are fully provided by the application as
+specified in the flow pattern. The payload of matching packets is
+encrypted on egress, and decrypted and authenticated on ingress.
+For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
+providing full encapsulation and decapsulation of packets in security
+protocols. The flow pattern specifies both the outer security header fields
+and the inner packet fields. The security session specified in the action
+must match the pattern parameters.
+
+The security session specified in the action must be created on the same
+port as the flow action that is being specified.
+
+The ingress/egress flow attribute should match that specified in the
+security session if the security session supports the definition of the
+direction.
+
+Multiple flows can be configured to use the same security session.
+
+- Non-terminating by default.
+
+.. _table_rte_flow_action_security:
+
+.. table:: SECURITY
+
+ +----------------------+--------------------------------------+
+ | Field | Value |
+ +======================+======================================+
+ | ``security_session`` | security session to apply |
+ +----------------------+--------------------------------------+
+
+The following is an example of configuring IPsec inline using the
+INLINE_CRYPTO security session:
+
+The encryption algorithm, keys and salt are part of the opaque
+``rte_security_session``. The SA is identified according to the IP and ESP
+fields in the pattern items.
+
+.. _table_rte_flow_item_esp_inline_example:
+
+.. table:: IPsec inline crypto flow pattern items.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | ESP |
+ +-------+----------+
+ | 3 | END |
+ +-------+----------+
+
+.. _table_rte_flow_action_esp_inline_example:
+
+.. table:: IPsec inline flow actions.
+
+ +-------+----------+
+ | Index | Action |
+ +=======+==========+
+ | 0 | SECURITY |
+ +-------+----------+
+ | 1 | END |
+ +-------+----------+
+
Negative types
~~~~~~~~~~~~~~
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions Akhil Goyal
@ 2017-10-15 12:50 ` Aviad Yehezkel
2017-10-16 19:17 ` Mcnamara, John
2017-10-20 11:00 ` Thomas Monjalon
2 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:50 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> From: Boris Pismenny <borisp@mellanox.com>
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Reviewed-by: John McNamara <john.mcnamara@intel.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 84 +++++++++++++++++++++++++++++++++++++-
> 1 file changed, 82 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 13e3dbe..ac1adf9 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -187,7 +187,7 @@ Pattern item
> Pattern items fall in two categories:
>
> - Matching protocol headers and packet data (ANY, RAW, ETH, VLAN, IPV4,
> - IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE and so on), usually
> + IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE, ESP and so on), usually
> associated with a specification structure.
>
> - Matching meta-data or affecting pattern processing (END, VOID, INVERT, PF,
> @@ -972,6 +972,14 @@ flow rules.
> - ``teid``: tunnel endpoint identifier.
> - Default ``mask`` matches teid only.
>
> +Item: ``ESP``
> +^^^^^^^^^^^^^
> +
> +Matches an ESP header.
> +
> +- ``hdr``: ESP header definition (``rte_esp.h``).
> +- Default ``mask`` matches SPI only.
> +
> Actions
> ~~~~~~~
>
> @@ -989,7 +997,7 @@ They fall in three categories:
> additional processing by subsequent flow rules.
>
> - Other non-terminating meta actions that do not affect the fate of packets
> - (END, VOID, MARK, FLAG, COUNT).
> + (END, VOID, MARK, FLAG, COUNT, SECURITY).
>
> When several actions are combined in a flow rule, they should all have
> different types (e.g. dropping a packet twice is not possible).
> @@ -1371,6 +1379,78 @@ rule or if packets are not addressed to a VF in the first place.
> | ``vf`` | VF ID to redirect packets to |
> +--------------+--------------------------------+
>
> +Action: ``SECURITY``
> +^^^^^^^^^^^^^^^^^^^^
> +
> +Perform the security action on flows matched by the pattern items
> +according to the configuration of the security session.
> +
> +This action modifies the payload of matched flows. For INLINE_CRYPTO, the
> +security protocol headers and IV are fully provided by the application as
> +specified in the flow pattern. The payload of matching packets is
> +encrypted on egress, and decrypted and authenticated on ingress.
> +For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
> +providing full encapsulation and decapsulation of packets in security
> +protocols. The flow pattern specifies both the outer security header fields
> +and the inner packet fields. The security session specified in the action
> +must match the pattern parameters.
> +
> +The security session specified in the action must be created on the same
> +port as the flow action that is being specified.
> +
> +The ingress/egress flow attribute should match that specified in the
> +security session if the security session supports the definition of the
> +direction.
> +
> +Multiple flows can be configured to use the same security session.
> +
> +- Non-terminating by default.
> +
> +.. _table_rte_flow_action_security:
> +
> +.. table:: SECURITY
> +
> + +----------------------+--------------------------------------+
> + | Field | Value |
> + +======================+======================================+
> + | ``security_session`` | security session to apply |
> + +----------------------+--------------------------------------+
> +
> +The following is an example of configuring IPsec inline using the
> +INLINE_CRYPTO security session:
> +
> +The encryption algorithm, keys and salt are part of the opaque
> +``rte_security_session``. The SA is identified according to the IP and ESP
> +fields in the pattern items.
> +
> +.. _table_rte_flow_item_esp_inline_example:
> +
> +.. table:: IPsec inline crypto flow pattern items.
> +
> + +-------+----------+
> + | Index | Item |
> + +=======+==========+
> + | 0 | Ethernet |
> + +-------+----------+
> + | 1 | IPv4 |
> + +-------+----------+
> + | 2 | ESP |
> + +-------+----------+
> + | 3 | END |
> + +-------+----------+
> +
> +.. _table_rte_flow_action_esp_inline_example:
> +
> +.. table:: IPsec inline flow actions.
> +
> + +-------+----------+
> + | Index | Action |
> + +=======+==========+
> + | 0 | SECURITY |
> + +-------+----------+
> + | 1 | END |
> + +-------+----------+
> +
> Negative types
> ~~~~~~~~~~~~~~
>
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions Akhil Goyal
2017-10-15 12:50 ` Aviad Yehezkel
@ 2017-10-16 19:17 ` Mcnamara, John
2017-10-20 11:00 ` Thomas Monjalon
2 siblings, 0 replies; 195+ messages in thread
From: Mcnamara, John @ 2017-10-16 19:17 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Ananyev, Konstantin, shahafs, olivier.matz
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Saturday, October 14, 2017 11:18 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Nicolau, Radu
> <radu.nicolau@intel.com>; borisp@mellanox.com; aviadye@mellanox.com;
> thomas@monjalon.net; sandeep.malik@nxp.com;
> jerin.jacob@caviumnetworks.com; Mcnamara, John <john.mcnamara@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; shahafs@mellanox.com;
> olivier.matz@6wind.com
> Subject: [PATCH v4 08/12] doc: add details of rte_flow security actions
>
> From: Boris Pismenny <borisp@mellanox.com>
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Reviewed-by: John McNamara <john.mcnamara@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions Akhil Goyal
2017-10-15 12:50 ` Aviad Yehezkel
2017-10-16 19:17 ` Mcnamara, John
@ 2017-10-20 11:00 ` Thomas Monjalon
2017-10-21 19:50 ` Akhil Goyal
2 siblings, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 11:00 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
This patch could be merged with the previous one, adding the action
in the rte_flow code.
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions
2017-10-20 11:00 ` Thomas Monjalon
@ 2017-10-21 19:50 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:50 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
On 10/20/2017 4:30 PM, Thomas Monjalon wrote:
> This patch could be merged with the previous one, adding the action
> in the rte_flow code.
>
>
ok.
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (7 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 08/12] doc: add details of rte_flow security actions Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:50 ` Aviad Yehezkel
2017-10-20 11:06 ` Thomas Monjalon
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec Akhil Goyal
` (4 subsequent siblings)
13 siblings, 2 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
MAINTAINERS | 6 ++++++
config/common_base | 6 ++++++
lib/Makefile | 5 +++++
mk/rte.app.mk | 1 +
4 files changed, 18 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 8518a99..bc9f9cf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -275,6 +275,12 @@ T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/librte_eventdev/*eth_rx_adapter*
F: test/test/test_event_eth_rx_adapter.c
+Security API - EXPERIMENTAL
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Declan Doherty <declan.doherty@intel.com>
+T: git://dpdk.org/draft/dpdk-draft-ipsec
+F: lib/librte_security/
+F: doc/guides/prog_guide/rte_security.rst
Networking Drivers
------------------
diff --git a/config/common_base b/config/common_base
index d9471e8..2b15f1e 100644
--- a/config/common_base
+++ b/config/common_base
@@ -548,6 +548,12 @@ CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO=n
CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO_DEBUG=n
#
+# Compile generic security library
+#
+CONFIG_RTE_LIBRTE_SECURITY=y
+CONFIG_RTE_LIBRTE_SECURITY_DEBUG=n
+
+#
# Compile generic event device library
#
CONFIG_RTE_LIBRTE_EVENTDEV=y
diff --git a/lib/Makefile b/lib/Makefile
index 86d475f..379515a 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -50,6 +50,11 @@ DEPDIRS-librte_ether += librte_mbuf
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
DEPDIRS-librte_cryptodev += librte_kvargs
+DEPDIRS-librte_cryptodev += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
+DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
+DEPDIRS-librte_security += librte_ether
+DEPDIRS-librte_security += librte_cryptodev
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8192b98..d975fad 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF) += -lrte_mbuf
_LDLIBS-$(CONFIG_RTE_LIBRTE_NET) += -lrte_net
_LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lrte_ethdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += -lrte_cryptodev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system Akhil Goyal
@ 2017-10-15 12:50 ` Aviad Yehezkel
2017-10-20 11:06 ` Thomas Monjalon
1 sibling, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:50 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> ---
> MAINTAINERS | 6 ++++++
> config/common_base | 6 ++++++
> lib/Makefile | 5 +++++
> mk/rte.app.mk | 1 +
> 4 files changed, 18 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8518a99..bc9f9cf 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -275,6 +275,12 @@ T: git://dpdk.org/next/dpdk-next-eventdev
> F: lib/librte_eventdev/*eth_rx_adapter*
> F: test/test/test_event_eth_rx_adapter.c
>
> +Security API - EXPERIMENTAL
> +M: Akhil Goyal <akhil.goyal@nxp.com>
> +M: Declan Doherty <declan.doherty@intel.com>
> +T: git://dpdk.org/draft/dpdk-draft-ipsec
> +F: lib/librte_security/
> +F: doc/guides/prog_guide/rte_security.rst
>
> Networking Drivers
> ------------------
> diff --git a/config/common_base b/config/common_base
> index d9471e8..2b15f1e 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -548,6 +548,12 @@ CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO=n
> CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO_DEBUG=n
>
> #
> +# Compile generic security library
> +#
> +CONFIG_RTE_LIBRTE_SECURITY=y
> +CONFIG_RTE_LIBRTE_SECURITY_DEBUG=n
> +
> +#
> # Compile generic event device library
> #
> CONFIG_RTE_LIBRTE_EVENTDEV=y
> diff --git a/lib/Makefile b/lib/Makefile
> index 86d475f..379515a 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -50,6 +50,11 @@ DEPDIRS-librte_ether += librte_mbuf
> DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
> DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
> DEPDIRS-librte_cryptodev += librte_kvargs
> +DEPDIRS-librte_cryptodev += librte_ether
> +DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
> +DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
> +DEPDIRS-librte_security += librte_ether
> +DEPDIRS-librte_security += librte_cryptodev
> DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
> DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
> DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 8192b98..d975fad 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF) += -lrte_mbuf
> _LDLIBS-$(CONFIG_RTE_LIBRTE_NET) += -lrte_net
> _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lrte_ethdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += -lrte_cryptodev
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
> _LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
> _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system Akhil Goyal
2017-10-15 12:50 ` Aviad Yehezkel
@ 2017-10-20 11:06 ` Thomas Monjalon
2017-10-21 19:44 ` Akhil Goyal
1 sibling, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 11:06 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Why not merge this patch with the first one?
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> +Security API - EXPERIMENTAL
> +M: Akhil Goyal <akhil.goyal@nxp.com>
> +M: Declan Doherty <declan.doherty@intel.com>
> +T: git://dpdk.org/draft/dpdk-draft-ipsec
> +F: lib/librte_security/
> +F: doc/guides/prog_guide/rte_security.rst
Do you really want to keep this draft tree?
If not, please do not reference it.
> +# Compile generic security library
> +#
> +CONFIG_RTE_LIBRTE_SECURITY=y
> +CONFIG_RTE_LIBRTE_SECURITY_DEBUG=n
No, DEBUG config options are prohibited.
The new log system allows changing the log level dynamically.
It was mentioned many times in other patch series.
I was hoping that everybody was now aware of the new log system
and of the desire to remove all DEBUG options.
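For reference, a minimal sketch of the dynamic log approach (names here are illustrative; the calls are the existing rte_log_register()/rte_log_set_level()/rte_log()):
#include <rte_log.h>
static int security_logtype;		/* illustrative name */
static void
security_log_init(void)			/* called once at library init */
{
	security_logtype = rte_log_register("lib.security");
	if (security_logtype >= 0)
		rte_log_set_level(security_logtype, RTE_LOG_NOTICE);
}
/* debug output is then enabled at run time (e.g. via --log-level)
 * instead of a compile-time CONFIG_..._DEBUG option */
#define SECURITY_LOG_DBG(...) \
	rte_log(RTE_LOG_DEBUG, security_logtype, __VA_ARGS__)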
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system
2017-10-20 11:06 ` Thomas Monjalon
@ 2017-10-21 19:44 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 19:44 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Hi Thomas,
On 10/20/2017 4:36 PM, Thomas Monjalon wrote:
> Why not merge this patch with the first one?
There are some code changes in ethdev and cryptodev (subsequent patches)
which are used by the first patch; merging them would break compilation.
So compilation of the lib is enabled last to avoid build issues.
>
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> +Security API - EXPERIMENTAL
>> +M: Akhil Goyal <akhil.goyal@nxp.com>
>> +M: Declan Doherty <declan.doherty@intel.com>
>> +T: git://dpdk.org/draft/dpdk-draft-ipsec
>> +F: lib/librte_security/
>> +F: doc/guides/prog_guide/rte_security.rst
>
> Do you really want to keep this draft tree?
> If no, please do not reference it.
ok, will remove it.
>
>> +# Compile generic security library
>> +#
>> +CONFIG_RTE_LIBRTE_SECURITY=y
>> +CONFIG_RTE_LIBRTE_SECURITY_DEBUG=n
>
> No, DEBUG config options are prohibited.
> The new log system allows to change the log level dynamically.
ok will remove it
>
> It was mentioned a lot of time in other patch series.
> I was in the hope that everybody was now aware of the new log system
> and the desire of removing all DEBUG options.
>
>
Thanks,
Akhil
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (8 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 09/12] mk: add rte security into build system Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:51 ` Aviad Yehezkel
` (2 more replies)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 11/12] crypto/dpaa2_sec: add support for protocol offload ipsec Akhil Goyal
` (3 subsequent siblings)
13 siblings, 3 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
drivers/net/Makefile | 2 +-
drivers/net/ixgbe/Makefile | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
drivers/net/ixgbe/ixgbe_ethdev.c | 19 +
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
drivers/net/ixgbe/ixgbe_flow.c | 47 +++
drivers/net/ixgbe/ixgbe_ipsec.c | 744 +++++++++++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_ipsec.h | 147 +++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 53 ++-
drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 50 ++-
11 files changed, 1079 insertions(+), 10 deletions(-)
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 5d2ad2f..339ff36 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -68,7 +68,7 @@ DEPDIRS-fm10k = $(core-libs) librte_hash
DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
DEPDIRS-i40e = $(core-libs) librte_hash
DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
-DEPDIRS-ixgbe = $(core-libs) librte_hash
+DEPDIRS-ixgbe = $(core-libs) librte_hash librte_security
DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
DEPDIRS-liquidio = $(core-libs)
DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 95c806d..6e963c7 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -118,11 +118,11 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
else
SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
endif
-
ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
endif
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 4aab278..b132a0f 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -161,4 +161,12 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
#define IXGBE_WRITE_REG_ARRAY(hw, reg, index, value) \
IXGBE_PCI_REG_WRITE(IXGBE_PCI_REG_ARRAY_ADDR((hw), (reg), (index)), (value))
+#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
+{ \
+ uint32_t cnt = poll_ms; \
+ IXGBE_WRITE_REG(hw, (reg), (val)); \
+ while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
+ rte_delay_ms(1); \
+}
+
#endif /* _IXGBE_OS_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14b9c53..fcabd5e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -61,6 +61,7 @@
#include <rte_random.h>
#include <rte_dev.h>
#include <rte_hash_crc.h>
+#include <rte_security_driver.h>
#include "ixgbe_logs.h"
#include "base/ixgbe_api.h"
@@ -1132,6 +1133,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
struct ixgbe_bw_conf *bw_conf =
IXGBE_DEV_PRIVATE_TO_BW_CONF(eth_dev->data->dev_private);
+ struct rte_security_ctx *security_instance;
uint32_t ctrl_ext;
uint16_t csum;
int diag, i;
@@ -1139,6 +1141,17 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &ixgbe_eth_dev_ops;
+ security_instance = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (security_instance == NULL)
+ return -ENOMEM;
+ security_instance->state = RTE_SECURITY_INSTANCE_VALID;
+ security_instance->device = (void *)eth_dev;
+ security_instance->ops = &ixgbe_security_ops;
+ security_instance->sess_cnt = 0;
+
+ eth_dev->data->security_ctx = security_instance;
+
eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
@@ -1169,6 +1182,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_SECURITY;
/* Vendor and Device ID need to be set before init of shared code */
hw->device_id = pci_dev->id.device_id;
@@ -1401,6 +1415,8 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(eth_dev);
+ rte_free(eth_dev->data->security_ctx);
+
return 0;
}
@@ -3695,6 +3711,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
hw->mac.type == ixgbe_mac_X550EM_a)
dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index e28c856..f5b52c4 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
+#include "ixgbe_ipsec.h"
#include <rte_time.h>
#include <rte_hash.h>
#include <rte_pci.h>
@@ -486,7 +487,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-
+ struct ixgbe_ipsec ipsec;
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
@@ -543,6 +544,9 @@ struct ixgbe_adapter {
#define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
(&((struct ixgbe_adapter *)adapter)->tm_conf)
+#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
+ (&((struct ixgbe_adapter *)adapter)->ipsec)
+
/*
* RX/TX function prototypes
*/
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 904c146..13c8243 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -187,6 +187,9 @@ const struct rte_flow_action *next_no_void_action(
* END
* other members in mask and spec should set to 0x00.
* item->last should be NULL.
+ *
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY.
+ *
*/
static int
cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
@@ -226,6 +229,41 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return -rte_errno;
}
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ act = next_no_void_action(actions, NULL);
+ if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
+ const void *conf = act->conf;
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ return -rte_errno;
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last ||
+ item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+
+ filter->proto = IPPROTO_ESP;
+ return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
+ item->type == RTE_FLOW_ITEM_TYPE_IPV6);
+ }
+
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -519,6 +557,10 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
+ /* An ESP flow is not really an ntuple flow */
+ if (filter->proto == IPPROTO_ESP)
+ return 0;
+
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -2758,6 +2800,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
+
+ /* An ESP flow is not really an ntuple flow */
+ if (ntuple_filter.proto == IPPROTO_ESP)
+ return flow;
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
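
For reference, a minimal sketch of how an application might install such a rule
against this parser, assuming sec_session is a struct rte_security_session *
already created for the ingress SA and port_id identifies the ixgbe port (both
hypothetical names). Only the IPv4/IPv6 item is inspected here, and the action
conf is the security session pointer itself, which is what
ixgbe_crypto_add_ingress_sa_from_flow() receives:

    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_ipv4 ipv4_spec = {
            .hdr.dst_addr = rte_cpu_to_be_32(0xc0a80101), /* 192.168.1.1 */
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_spec },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
            /* conf carries the rte_security session for the ingress SA */
            { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sec_session },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow *flow =
            rte_flow_create(port_id, &attr, pattern, actions, &err);

The special case added above then programs the SA through
ixgbe_crypto_add_ingress_sa_from_flow() instead of installing an ntuple filter.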
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
new file mode 100644
index 0000000..6ace305
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -0,0 +1,744 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_ip.h>
+#include <rte_jhash.h>
+#include <rte_security_driver.h>
+#include <rte_cryptodev.h>
+#include <rte_flow.h>
+
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_api.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_ipsec.h"
+
+#define RTE_IXGBE_REGISTER_POLL_WAIT_5_MS 5
+
+#define IXGBE_WAIT_RREAD \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
+ IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_RWRITE \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
+ IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_TREAD \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
+ IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_TWRITE \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
+ IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+
+#define CMP_IP(a, b) (\
+ (a).ipv6[0] == (b).ipv6[0] && \
+ (a).ipv6[1] == (b).ipv6[1] && \
+ (a).ipv6[2] == (b).ipv6[2] && \
+ (a).ipv6[3] == (b).ipv6[3])
+
+
+static void
+ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int i = 0;
+
+ /* clear Rx IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ uint16_t index = i << 3;
+ uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ IXGBE_WAIT_RWRITE;
+ }
+
+ /* clear Rx SPI and Rx/Tx SA tables*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ uint32_t index = i << 3;
+ uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+ }
+}
+
+static int
+ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
+{
+ struct rte_eth_dev *dev = ic_session->dev;
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
+ dev->data->dev_private);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip,
+ ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+ /* If no match, find a free entry in the IP table*/
+ if (ip_index < 0) {
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (priv->rx_ip_tbl[i].ref_count == 0) {
+ ip_index = i;
+ break;
+ }
+ }
+ }
+
+ /* Fail if no match and no free entries*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx SA table\n");
+ return -1;
+ }
+
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0] =
+ ic_session->dst_ip.ipv6[0];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1] =
+ ic_session->dst_ip.ipv6[1];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2] =
+ ic_session->dst_ip.ipv6[2];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3] =
+ ic_session->dst_ip.ipv6[3];
+ priv->rx_ip_tbl[ip_index].ref_count++;
+
+ priv->rx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->rx_sa_tbl[sa_index].ip_index = ip_index;
+ priv->rx_sa_tbl[sa_index].key[3] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
+ priv->rx_sa_tbl[sa_index].key[2] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
+ priv->rx_sa_tbl[sa_index].key[1] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
+ priv->rx_sa_tbl[sa_index].key[0] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
+ priv->rx_sa_tbl[sa_index].salt =
+ rte_cpu_to_be_32(ic_session->salt);
+ priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID;
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
+ priv->rx_sa_tbl[sa_index].mode |=
+ (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
+ if (ic_session->dst_ip.type == IPv6)
+ priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6;
+ priv->rx_sa_tbl[sa_index].used = 1;
+
+ /* write IP table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_IP | (ip_index << 3);
+ if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv4);
+ } else {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3]);
+ }
+ IXGBE_WAIT_RWRITE;
+
+ /* write SPI table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
+ priv->rx_sa_tbl[sa_index].spi);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
+ priv->rx_sa_tbl[sa_index].ip_index);
+ IXGBE_WAIT_RWRITE;
+
+ /* write Key table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
+ priv->rx_sa_tbl[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
+ priv->rx_sa_tbl[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
+ priv->rx_sa_tbl[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
+ priv->rx_sa_tbl[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
+ priv->rx_sa_tbl[sa_index].salt);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
+ priv->rx_sa_tbl[sa_index].mode);
+ IXGBE_WAIT_RWRITE;
+
+ } else { /* ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION */
+ int i;
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Tx SA table\n");
+ return -1;
+ }
+
+ priv->tx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->tx_sa_tbl[sa_index].key[3] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
+ priv->tx_sa_tbl[sa_index].key[2] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
+ priv->tx_sa_tbl[sa_index].key[1] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
+ priv->tx_sa_tbl[sa_index].key[0] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
+ priv->tx_sa_tbl[sa_index].salt =
+ rte_cpu_to_be_32(ic_session->salt);
+
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
+ priv->tx_sa_tbl[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
+ priv->tx_sa_tbl[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
+ priv->tx_sa_tbl[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
+ priv->tx_sa_tbl[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
+ priv->tx_sa_tbl[sa_index].salt);
+ IXGBE_WAIT_TWRITE;
+
+ priv->tx_sa_tbl[sa_index].used = 1;
+ ic_session->sa_index = sa_index;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
+ struct ixgbe_crypto_session *ic_session)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_ipsec *priv =
+ IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+
+ /* Fail if no match*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find the matching SPI in the Rx SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx SA table\n");
+ return -1;
+ }
+
+ /* Disable and clear the Rx SPI and key table entries*/
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ priv->rx_sa_tbl[sa_index].used = 0;
+
+ /* If last used then clear the IP table entry*/
+ priv->rx_ip_tbl[ip_index].ref_count--;
+ if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP |
+ (ip_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ }
+ } else { /* ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION */
+ int i;
+
+ /* Find a match in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no matching entry*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Tx SA table\n");
+ return -1;
+ }
+ reg_val = IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+
+ priv->tx_sa_tbl[sa_index].used = 0;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_create_session(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *session,
+ struct rte_mempool *mempool)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+ struct ixgbe_crypto_session *ic_session = NULL;
+ struct rte_crypto_aead_xform *aead_xform;
+ struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
+
+ if (rte_mempool_get(mempool, (void **)&ic_session)) {
+ PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
+ return -ENOMEM;
+ }
+
+ if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
+ conf->crypto_xform->aead.algo !=
+ RTE_CRYPTO_AEAD_AES_GCM) {
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ return -ENOTSUP;
+ }
+ aead_xform = &conf->crypto_xform->aead;
+
+ if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ if (dev_conf->rxmode.enable_sec) {
+ ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ return -ENOTSUP;
+ }
+ } else {
+ if (dev_conf->txmode.enable_sec) {
+ ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ return -ENOTSUP;
+ }
+ }
+
+ ic_session->key = aead_xform->key.data;
+ memcpy(&ic_session->salt,
+ &aead_xform->key.data[aead_xform->key.length], 4);
+ ic_session->spi = conf->ipsec.spi;
+ ic_session->dev = eth_dev;
+
+ set_sec_session_private_data(session, ic_session);
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ if (ixgbe_crypto_add_sa(ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ return -EPERM;
+ }
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_remove_session(void *device,
+ struct rte_security_session *session)
+{
+ struct rte_eth_dev *eth_dev = device;
+ struct ixgbe_crypto_session *ic_session =
+ (struct ixgbe_crypto_session *)
+ get_sec_session_private_data(session);
+ struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+
+ if (eth_dev != ic_session->dev) {
+ PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ return -ENODEV;
+ }
+
+ if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ return -EFAULT;
+ }
+
+ rte_mempool_put(mempool, (void *)ic_session);
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_update_mb(void *device __rte_unused,
+ struct rte_security_session *session,
+ struct rte_mbuf *m, void *params __rte_unused)
+{
+ struct ixgbe_crypto_session *ic_session =
+ get_sec_session_private_data(session);
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ struct ixgbe_crypto_tx_desc_md *mdata =
+ (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
+ mdata->enc = 1;
+ mdata->sa_idx = ic_session->sa_index;
+ mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
+ uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
+ }
+ return 0;
+}
+
+struct rte_cryptodev_capabilities aes_gmac_crypto_capabilities[] = {
+ { /* AES GMAC (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
+ }, }
+ },
+};
+
+struct rte_cryptodev_capabilities aes_gcm_gmac_crypto_capabilities[] = {
+ { /* AES GMAC (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES GCM (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 8,
+ .max = 16,
+ .increment = 4
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
+ }, }
+ },
+};
+
+static const struct rte_security_capability ixgbe_security_capabilities[] = {
+ { /* IPsec Inline Crypto ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Transport Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+};
+
+static const struct rte_security_capability *
+ixgbe_crypto_capabilities_get(void *device __rte_unused)
+{
+ return ixgbe_security_capabilities;
+}
+
+
+int
+ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint32_t reg;
+
+ /* sanity checks */
+ if (dev->data->dev_conf.rxmode.enable_lro) {
+ PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
+ return -1;
+ }
+ if (!dev->data->dev_conf.rxmode.hw_strip_crc) {
+ PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
+ return -1;
+ }
+
+
+ /* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
+
+ /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
+ * hang will occur with heavy traffic.
+ */
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+ reg = (reg & 0xFFFFFFF0) | 0x3;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+
+ reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+
+ if (dev->data->dev_conf.rxmode.enable_sec) {
+ IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
+ reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
+ if (reg != 0) {
+ PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+ return -1;
+ }
+ }
+ if (dev->data->dev_conf.txmode.enable_sec) {
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
+ IXGBE_SECTXCTRL_STORE_FORWARD);
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
+ if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
+ PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+ return -1;
+ }
+ }
+
+ ixgbe_crypto_clear_ipsec_tables(dev);
+
+ return 0;
+}
+
+int
+ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
+ const void *ip_spec,
+ uint8_t is_ipv6)
+{
+ struct ixgbe_crypto_session *ic_session
+ = get_sec_session_private_data(sess);
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ if (is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
+ ic_session->src_ip.type = IPv6;
+ ic_session->dst_ip.type = IPv6;
+ rte_memcpy(ic_session->src_ip.ipv6,
+ ipv6->hdr.src_addr, 16);
+ rte_memcpy(ic_session->dst_ip.ipv6,
+ ipv6->hdr.dst_addr, 16);
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
+ ic_session->src_ip.type = IPv4;
+ ic_session->dst_ip.type = IPv4;
+ ic_session->src_ip.ipv4 = ipv4->hdr.src_addr;
+ ic_session->dst_ip.ipv4 = ipv4->hdr.dst_addr;
+ }
+ return ixgbe_crypto_add_sa(ic_session);
+ }
+
+ return 0;
+}
+
+
+struct rte_security_ops ixgbe_security_ops = {
+ .session_create = ixgbe_crypto_create_session,
+ .session_update = NULL,
+ .session_stats_get = NULL,
+ .session_destroy = ixgbe_crypto_remove_session,
+
+ .set_pkt_metadata = ixgbe_crypto_update_mb,
+
+ .capabilities_get = ixgbe_crypto_capabilities_get
+};
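
The ops table above is what rte_security dispatches into for this port. A
minimal sketch of the application side, assuming ctx is the port's
struct rte_security_ctx * (stored by this patch in eth_dev->data->security_ctx)
and sess_mempool is a mempool sized for rte_security sessions (hypothetical
names); the 4-byte salt is appended after the 16-byte AES-GCM key because
ixgbe_crypto_create_session() copies it from beyond key.length:

    uint8_t key_salt[20] = { 0 };  /* 16 key bytes followed by the 4-byte salt */
    struct rte_crypto_sym_xform aead = {
            .type = RTE_CRYPTO_SYM_XFORM_AEAD,
            .aead = {
                    .op = RTE_CRYPTO_AEAD_OP_DECRYPT,
                    .algo = RTE_CRYPTO_AEAD_AES_GCM,
                    .key = { .data = key_salt, .length = 16 },
            },
    };
    struct rte_security_session_conf conf = {
            .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
            .ipsec = {
                    .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
                    .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                    .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
                    .spi = 8,
            },
            .crypto_xform = &aead,
    };
    struct rte_security_session *sess =
            rte_security_session_create(ctx, &conf, sess_mempool);

For an egress SA (RTE_SECURITY_IPSEC_SA_DIR_EGRESS) the SA is written to the
hardware tables at session creation time; for ingress it is written when the
flow rule carrying the session is created, as in the ixgbe_flow.c sketch above.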
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
new file mode 100644
index 0000000..9f06235
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.h
@@ -0,0 +1,147 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef IXGBE_IPSEC_H_
+#define IXGBE_IPSEC_H_
+
+#include <rte_security.h>
+
+#define IPSRXIDX_RX_EN 0x00000001
+#define IPSRXIDX_TABLE_IP 0x00000002
+#define IPSRXIDX_TABLE_SPI 0x00000004
+#define IPSRXIDX_TABLE_KEY 0x00000006
+#define IPSRXIDX_WRITE 0x80000000
+#define IPSRXIDX_READ 0x40000000
+#define IPSRXMOD_VALID 0x00000001
+#define IPSRXMOD_PROTO 0x00000004
+#define IPSRXMOD_DECRYPT 0x00000008
+#define IPSRXMOD_IPV6 0x00000010
+#define IXGBE_ADVTXD_POPTS_IPSEC 0x00000400
+#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000
+#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000
+#define IXGBE_RXDADV_IPSEC_STATUS_SECP 0x00020000
+#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK 0x18000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH 0x10000000
+#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED 0x18000000
+
+#define IPSEC_MAX_RX_IP_COUNT 128
+#define IPSEC_MAX_SA_COUNT 1024
+
+enum ixgbe_operation {
+ IXGBE_OP_AUTHENTICATED_ENCRYPTION,
+ IXGBE_OP_AUTHENTICATED_DECRYPTION
+};
+
+enum ixgbe_gcm_key {
+ IXGBE_GCM_KEY_128,
+ IXGBE_GCM_KEY_256
+};
+
+/**
+ * Generic IP address structure
+ * TODO: Find a better location for this; possibly rte_net.h.
+ **/
+struct ipaddr {
+ enum ipaddr_type {
+ IPv4,
+ IPv6
+ } type;
+ /**< IP Address Type - IPv4/IPv6 */
+
+ union {
+ uint32_t ipv4;
+ uint32_t ipv6[4];
+ };
+};
+
+/** inline crypto private session structure */
+struct ixgbe_crypto_session {
+ enum ixgbe_operation op;
+ uint8_t *key;
+ uint32_t salt;
+ uint32_t sa_index;
+ uint32_t spi;
+ struct ipaddr src_ip;
+ struct ipaddr dst_ip;
+ struct rte_eth_dev *dev;
+} __rte_cache_aligned;
+
+struct ixgbe_crypto_rx_ip_table {
+ struct ipaddr ip;
+ uint16_t ref_count;
+};
+struct ixgbe_crypto_rx_sa_table {
+ uint32_t spi;
+ uint32_t ip_index;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t mode;
+ uint8_t used;
+};
+
+struct ixgbe_crypto_tx_sa_table {
+ uint32_t spi;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t used;
+};
+
+struct ixgbe_crypto_tx_desc_md {
+ union {
+ uint64_t data;
+ struct {
+ uint32_t sa_idx;
+ uint8_t pad_len;
+ uint8_t enc;
+ };
+ };
+};
+
+struct ixgbe_ipsec {
+ struct ixgbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
+ struct ixgbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
+ struct ixgbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
+};
+
+extern struct rte_security_ops ixgbe_security_ops;
+
+
+int ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
+int ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
+ const void *ip_spec,
+ uint8_t is_ipv6);
+
+
+
+#endif /*IXGBE_IPSEC_H_*/
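
Because the egress capabilities in ixgbe_ipsec.c advertise
RTE_SECURITY_TX_OLOAD_NEED_MDATA, the application has to attach the per-packet
metadata (packed into udata64 via struct ixgbe_crypto_tx_desc_md above) before
transmitting. A rough per-packet sketch, assuming ctx, sess and port_id from
the earlier sketches, with m being the ESP packet whose pad-length byte is
already in place (all hypothetical variables):

    m->ol_flags |= PKT_TX_SEC_OFFLOAD;
    if (rte_security_set_pkt_metadata(ctx, sess, m, NULL) != 0) {
            rte_pktmbuf_free(m);  /* metadata could not be attached */
    } else {
            uint16_t sent = rte_eth_tx_burst(port_id, 0, &m, 1);
            if (sent == 0)
                    rte_pktmbuf_free(m);
    }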
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0038dfb..279e3fa 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -93,6 +93,7 @@
PKT_TX_TCP_SEG | \
PKT_TX_MACSEC | \
PKT_TX_OUTER_IP_CKSUM | \
+ PKT_TX_SEC_OFFLOAD | \
IXGBE_TX_IEEE1588_TMST)
#define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
@@ -395,7 +396,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
- uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
+ uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
+ struct rte_mbuf *mb)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx = 0;
@@ -479,6 +481,18 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
+ if (mb->ol_flags & PKT_TX_SEC_OFFLOAD) {
+ struct ixgbe_crypto_tx_desc_md *mdata =
+ (struct ixgbe_crypto_tx_desc_md *)
+ &mb->udata64;
+ seqnum_seed |=
+ (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata->sa_idx);
+ type_tucmd_mlhl |= mdata->enc ?
+ (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
+ IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
+ type_tucmd_mlhl |=
+ (mdata->pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
+ }
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -657,6 +671,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
+ uint8_t use_ipsec;
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -684,6 +699,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
+ use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -695,6 +711,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+ if (use_ipsec) {
+ struct ixgbe_crypto_tx_desc_md *ipsec_mdata =
+ (struct ixgbe_crypto_tx_desc_md *)
+ &tx_pkt->udata64;
+ tx_offload.sa_idx = ipsec_mdata->sa_idx;
+ tx_offload.sec_pad_len = ipsec_mdata->pad_len;
+ }
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -855,7 +878,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- tx_offload);
+ tx_offload, tx_pkt);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -873,6 +896,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+ if (use_ipsec)
+ olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
m_seg = tx_pkt;
do {
@@ -1447,6 +1472,12 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
}
+ if (rx_status & IXGBE_RXD_STAT_SECP) {
+ pkt_flags |= PKT_RX_SEC_OFFLOAD;
+ if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
+ pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+ }
+
return pkt_flags;
}
@@ -2364,8 +2395,9 @@ void __attribute__((cold))
ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
- if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS)
- && (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
+ if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS) &&
+ (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) &&
+ !(dev->data->dev_conf.txmode.enable_sec)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = NULL;
#ifdef RTE_IXGBE_INC_VECTOR
@@ -2535,6 +2567,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->txq_flags = tx_conf->txq_flags;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
+ txq->using_ipsec = dev->data->dev_conf.txmode.enable_sec;
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -4519,6 +4552,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->rx_using_sse = rx_using_sse;
+ rxq->using_ipsec = dev->data->dev_conf.rxmode.enable_sec;
}
}
@@ -5006,6 +5040,17 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
ixgbe_setup_loopback_link_82599(hw);
+ if (dev->data->dev_conf.rxmode.enable_sec ||
+ dev->data->dev_conf.txmode.enable_sec) {
+ ret = ixgbe_crypto_enable_ipsec(dev);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "ixgbe_crypto_enable_ipsec fails with %d.",
+ ret);
+ return ret;
+ }
+ }
+
return 0;
}
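
The checks in ixgbe_crypto_enable_ipsec() and the queue flags above translate
into a small amount of application-side configuration. A minimal sketch,
assuming port_id and a single Rx/Tx queue pair (queue setup elided):

    struct rte_eth_conf port_conf;

    memset(&port_conf, 0, sizeof(port_conf));
    /* HW CRC strip is mandatory and LRO/RSC must stay disabled, see
     * ixgbe_crypto_enable_ipsec() above */
    port_conf.rxmode.hw_strip_crc = 1;
    port_conf.rxmode.enable_lro = 0;
    port_conf.rxmode.enable_sec = 1;
    port_conf.txmode.enable_sec = 1;

    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
            rte_exit(EXIT_FAILURE, "cannot configure port\n");
    /* ... rte_eth_rx_queue_setup() / rte_eth_tx_queue_setup() ... */
    if (rte_eth_dev_start(port_id) < 0)
            rte_exit(EXIT_FAILURE, "cannot start port\n");

With enable_sec set on both sides, ixgbe_dev_rxtx_start() calls
ixgbe_crypto_enable_ipsec(), the queues pick up using_ipsec, and the simple Tx
path is disqualified as handled in ixgbe_set_tx_function().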
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 81c527f..4017831 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -138,8 +138,10 @@ struct ixgbe_rx_queue {
uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
- uint16_t rx_using_sse;
+ uint8_t rx_using_sse;
/**< indicates that vector RX is in use */
+ uint8_t using_ipsec;
+ /**< indicates that IPsec RX feature is in use */
#ifdef RTE_IXGBE_INC_VECTOR
uint16_t rxrearm_nb; /**< number of remaining to be re-armed */
uint16_t rxrearm_start; /**< the idx we start the re-arming from */
@@ -183,6 +185,10 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
+
+ /* inline ipsec related*/
+ uint64_t sa_idx:8; /**< TX SA database entry index */
+ uint64_t sec_pad_len:4; /**< padding length */
};
};
@@ -247,6 +253,9 @@ struct ixgbe_tx_queue {
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
uint8_t tx_deferred_start; /**< not in global dev start. */
+ uint8_t using_ipsec;
+ /**< indicates that IPsec TX feature is in use */
+
};
struct ixgbe_txq_ops {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e704a7f..c9b1e2e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -124,10 +124,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
- struct rte_mbuf **rx_pkts)
+ struct rte_mbuf **rx_pkts, uint8_t use_ipsec)
{
__m128i ptype0, ptype1, vtag0, vtag1, csum;
__m128i rearm0, rearm1, rearm2, rearm3;
+ __m128i sterr0, sterr1, sterr2, sterr3;
+ __m128i tmp1, tmp2, tmp3, tmp4;
/* mask everything except rss type */
const __m128i rsstype_msk = _mm_set_epi16(
@@ -174,6 +176,41 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
+ const __m128i ipsec_sterr_msk = _mm_set_epi32(
+ 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
+ IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED,
+ 0, 0);
+ const __m128i ipsec_proc_msk = _mm_set_epi32(
+ 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
+ const __m128i ipsec_err_flag = _mm_set_epi32(
+ 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
+ 0, 0);
+ const __m128i ipsec_proc_flag = _mm_set_epi32(
+ 0, PKT_RX_SEC_OFFLOAD, 0, 0);
+
+ if (use_ipsec) {
+ sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
+ sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
+ sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
+ sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
+ tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
+ tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
+ tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
+ sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+ _mm_and_si128(tmp2, ipsec_proc_flag));
+ sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
+ _mm_and_si128(tmp4, ipsec_proc_flag));
+ tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
+ tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
+ tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
+ sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+ _mm_and_si128(tmp2, ipsec_proc_flag));
+ sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
+ _mm_and_si128(tmp4, ipsec_proc_flag));
+ }
+
ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
@@ -221,6 +258,13 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
rearm2 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 4), 0x10);
rearm3 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 2), 0x10);
+ if (use_ipsec) {
+ rearm0 = _mm_or_si128(rearm0, sterr0);
+ rearm1 = _mm_or_si128(rearm1, sterr1);
+ rearm2 = _mm_or_si128(rearm2, sterr2);
+ rearm3 = _mm_or_si128(rearm3, sterr3);
+ }
+
/* write the rearm data and the olflags in one write */
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
offsetof(struct rte_mbuf, rearm_data) + 8);
@@ -310,6 +354,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ixgbe_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
+ uint8_t use_ipsec = rxq->using_ipsec;
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -471,7 +516,8 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
/* set ol_flags with vlan packet type */
- desc_to_olflags_v(descs, mbuf_init, vlan_flags, &rx_pkts[pos]);
+ desc_to_olflags_v(descs, mbuf_init, vlan_flags,
+ &rx_pkts[pos], use_ipsec);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
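
Both the scalar and the vector Rx paths above end up expressing the descriptor
SECP/error bits as mbuf flags, so the receive side only needs to inspect
ol_flags. A small sketch, with pkts, nb and process_inbound_esp() as
hypothetical application names:

    uint16_t i;

    for (i = 0; i < nb; i++) {
            uint64_t flags = pkts[i]->ol_flags;

            if (!(flags & PKT_RX_SEC_OFFLOAD)) {
                    /* not an inline IPsec packet */
                    continue;
            }
            if (flags & PKT_RX_SEC_OFFLOAD_FAILED) {
                    /* bad signature or malformed ESP, drop it */
                    rte_pktmbuf_free(pkts[i]);
                    continue;
            }
            /* payload was authenticated and decrypted inline; the ESP
             * header/trailer still need to be stripped in software */
            process_inbound_esp(pkts[i]);
    }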
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec Akhil Goyal
@ 2017-10-15 12:51 ` Aviad Yehezkel
2017-10-16 10:41 ` Thomas Monjalon
2017-10-18 21:29 ` Ananyev, Konstantin
2017-10-19 9:04 ` Ananyev, Konstantin
2 siblings, 1 reply; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:51 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> From: Radu Nicolau <radu.nicolau@intel.com>
>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> drivers/net/Makefile | 2 +-
> drivers/net/ixgbe/Makefile | 2 +-
> drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 19 +
> drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
> drivers/net/ixgbe/ixgbe_flow.c | 47 +++
> drivers/net/ixgbe/ixgbe_ipsec.c | 744 +++++++++++++++++++++++++++++++++
> drivers/net/ixgbe/ixgbe_ipsec.h | 147 +++++++
> drivers/net/ixgbe/ixgbe_rxtx.c | 53 ++-
> drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 50 ++-
> 11 files changed, 1079 insertions(+), 10 deletions(-)
> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
>
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index 5d2ad2f..339ff36 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -68,7 +68,7 @@ DEPDIRS-fm10k = $(core-libs) librte_hash
> DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
> DEPDIRS-i40e = $(core-libs) librte_hash
> DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
> -DEPDIRS-ixgbe = $(core-libs) librte_hash
> +DEPDIRS-ixgbe = $(core-libs) librte_hash librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
> DEPDIRS-liquidio = $(core-libs)
> DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
> diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
> index 95c806d..6e963c7 100644
> --- a/drivers/net/ixgbe/Makefile
> +++ b/drivers/net/ixgbe/Makefile
> @@ -118,11 +118,11 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
> else
> SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
> endif
> -
> ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
> endif
> +SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
>
> diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
> index 4aab278..b132a0f 100644
> --- a/drivers/net/ixgbe/base/ixgbe_osdep.h
> +++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
> @@ -161,4 +161,12 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
> #define IXGBE_WRITE_REG_ARRAY(hw, reg, index, value) \
> IXGBE_PCI_REG_WRITE(IXGBE_PCI_REG_ARRAY_ADDR((hw), (reg), (index)), (value))
>
> +#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
> +{ \
> + uint32_t cnt = poll_ms; \
> + IXGBE_WRITE_REG(hw, (reg), (val)); \
> + while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
> + rte_delay_ms(1); \
> +}
> +
> #endif /* _IXGBE_OS_H_ */
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 14b9c53..fcabd5e 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -61,6 +61,7 @@
> #include <rte_random.h>
> #include <rte_dev.h>
> #include <rte_hash_crc.h>
> +#include <rte_security_driver.h>
>
> #include "ixgbe_logs.h"
> #include "base/ixgbe_api.h"
> @@ -1132,6 +1133,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
> struct ixgbe_bw_conf *bw_conf =
> IXGBE_DEV_PRIVATE_TO_BW_CONF(eth_dev->data->dev_private);
> + struct rte_security_ctx *security_instance;
> uint32_t ctrl_ext;
> uint16_t csum;
> int diag, i;
> @@ -1139,6 +1141,17 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> PMD_INIT_FUNC_TRACE();
>
> eth_dev->dev_ops = &ixgbe_eth_dev_ops;
> + security_instance = rte_malloc("rte_security_instances_ops",
> + sizeof(struct rte_security_ctx), 0);
> + if (security_instance == NULL)
> + return -ENOMEM;
> + security_instance->state = RTE_SECURITY_INSTANCE_VALID;
> + security_instance->device = (void *)eth_dev;
> + security_instance->ops = &ixgbe_security_ops;
> + security_instance->sess_cnt = 0;
> +
> + eth_dev->data->security_ctx = security_instance;
> +
> eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
> eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> @@ -1169,6 +1182,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>
> rte_eth_copy_pci_info(eth_dev, pci_dev);
> eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
> + eth_dev->data->dev_flags |= RTE_ETH_DEV_SECURITY;
>
> /* Vendor and Device ID need to be set before init of shared code */
> hw->device_id = pci_dev->id.device_id;
> @@ -1401,6 +1415,8 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
> /* Remove all Traffic Manager configuration */
> ixgbe_tm_conf_uninit(eth_dev);
>
> + rte_free(eth_dev->data->security_ctx);
> +
> return 0;
> }
>
> @@ -3695,6 +3711,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> hw->mac.type == ixgbe_mac_X550EM_a)
> dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
>
> + dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
> + dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
> +
> dev_info->default_rxconf = (struct rte_eth_rxconf) {
> .rx_thresh = {
> .pthresh = IXGBE_DEFAULT_RX_PTHRESH,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> index e28c856..f5b52c4 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -38,6 +38,7 @@
> #include "base/ixgbe_dcb_82599.h"
> #include "base/ixgbe_dcb_82598.h"
> #include "ixgbe_bypass.h"
> +#include "ixgbe_ipsec.h"
> #include <rte_time.h>
> #include <rte_hash.h>
> #include <rte_pci.h>
> @@ -486,7 +487,7 @@ struct ixgbe_adapter {
> struct ixgbe_filter_info filter;
> struct ixgbe_l2_tn_info l2_tn;
> struct ixgbe_bw_conf bw_conf;
> -
> + struct ixgbe_ipsec ipsec;
> bool rx_bulk_alloc_allowed;
> bool rx_vec_allowed;
> struct rte_timecounter systime_tc;
> @@ -543,6 +544,9 @@ struct ixgbe_adapter {
> #define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
> (&((struct ixgbe_adapter *)adapter)->tm_conf)
>
> +#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
> + (&((struct ixgbe_adapter *)adapter)->ipsec)
> +
> /*
> * RX/TX function prototypes
> */
> diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
> index 904c146..13c8243 100644
> --- a/drivers/net/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/ixgbe/ixgbe_flow.c
> @@ -187,6 +187,9 @@ const struct rte_flow_action *next_no_void_action(
> * END
> * other members in mask and spec should set to 0x00.
> * item->last should be NULL.
> + *
> + * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY.
> + *
> */
> static int
> cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> @@ -226,6 +229,41 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> return -rte_errno;
> }
>
> + /**
> + * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
> + */
> + act = next_no_void_action(actions, NULL);
> + if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
> + const void *conf = act->conf;
> + /* check if the next not void item is END */
> + act = next_no_void_action(actions, act);
> + if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "Not supported action.");
> + return -rte_errno;
> + }
> +
> + /* get the IP pattern*/
> + item = next_no_void_pattern(pattern, NULL);
> + while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> + item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
> + if (item->last ||
> + item->type == RTE_FLOW_ITEM_TYPE_END) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "IP pattern missing.");
> + return -rte_errno;
> + }
> + item = next_no_void_pattern(pattern, item);
> + }
> +
> + filter->proto = IPPROTO_ESP;
> + return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
> + item->type == RTE_FLOW_ITEM_TYPE_IPV6);
> + }
> +
> /* the first not void item can be MAC or IPv4 */
> item = next_no_void_pattern(pattern, NULL);
>
> @@ -519,6 +557,10 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
> if (ret)
> return ret;
>
> + /* ESP flow not really a flow*/
> + if (filter->proto == IPPROTO_ESP)
> + return 0;
> +
> /* Ixgbe doesn't support tcp flags. */
> if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
> memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> @@ -2758,6 +2800,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
> memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
> ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
> actions, &ntuple_filter, error);
> +
> + /* ESP flow not really a flow*/
> + if (ntuple_filter.proto == IPPROTO_ESP)
> + return flow;
> +
> if (!ret) {
> ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
> if (!ret) {
> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
> new file mode 100644
> index 0000000..6ace305
> --- /dev/null
> +++ b/drivers/net/ixgbe/ixgbe_ipsec.c
> @@ -0,0 +1,744 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_ethdev.h>
> +#include <rte_ethdev_pci.h>
> +#include <rte_ip.h>
> +#include <rte_jhash.h>
> +#include <rte_security_driver.h>
> +#include <rte_cryptodev.h>
> +#include <rte_flow.h>
> +
> +#include "base/ixgbe_type.h"
> +#include "base/ixgbe_api.h"
> +#include "ixgbe_ethdev.h"
> +#include "ixgbe_ipsec.h"
> +
> +#define RTE_IXGBE_REGISTER_POLL_WAIT_5_MS 5
> +
> +#define IXGBE_WAIT_RREAD \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
> + IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +#define IXGBE_WAIT_RWRITE \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
> + IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +#define IXGBE_WAIT_TREAD \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
> + IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +#define IXGBE_WAIT_TWRITE \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
> + IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +
> +#define CMP_IP(a, b) (\
> + (a).ipv6[0] == (b).ipv6[0] && \
> + (a).ipv6[1] == (b).ipv6[1] && \
> + (a).ipv6[2] == (b).ipv6[2] && \
> + (a).ipv6[3] == (b).ipv6[3])
> +
> +
> +static void
> +ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + int i = 0;
> +
> + /* clear Rx IP table*/
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + uint16_t index = i << 3;
> + uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
> + IXGBE_WAIT_RWRITE;
> + }
> +
> + /* clear Rx SPI and Rx/Tx SA tables*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + uint32_t index = i << 3;
> + uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
> + IXGBE_WAIT_RWRITE;
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
> + IXGBE_WAIT_RWRITE;
> + reg_val = IPSRXIDX_WRITE | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
> + IXGBE_WAIT_TWRITE;
> + }
> +}
> +
> +static int
> +ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
> +{
> + struct rte_eth_dev *dev = ic_session->dev;
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
> + dev->data->dev_private);
> + uint32_t reg_val;
> + int sa_index = -1;
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> + int i, ip_index = -1;
> +
> + /* Find a match in the IP table*/
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + if (CMP_IP(priv->rx_ip_tbl[i].ip,
> + ic_session->dst_ip)) {
> + ip_index = i;
> + break;
> + }
> + }
> + /* If no match, find a free entry in the IP table*/
> + if (ip_index < 0) {
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + if (priv->rx_ip_tbl[i].ref_count == 0) {
> + ip_index = i;
> + break;
> + }
> + }
> + }
> +
> + /* Fail if no match and no free entries*/
> + if (ip_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "No free entry left in the Rx IP table\n");
> + return -1;
> + }
> +
> + /* Find a free entry in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->rx_sa_tbl[i].used == 0) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no free entries*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "No free entry left in the Rx SA table\n");
> + return -1;
> + }
> +
> + priv->rx_ip_tbl[ip_index].ip.ipv6[0] =
> + ic_session->dst_ip.ipv6[0];
> + priv->rx_ip_tbl[ip_index].ip.ipv6[1] =
> + ic_session->dst_ip.ipv6[1];
> + priv->rx_ip_tbl[ip_index].ip.ipv6[2] =
> + ic_session->dst_ip.ipv6[2];
> + priv->rx_ip_tbl[ip_index].ip.ipv6[3] =
> + ic_session->dst_ip.ipv6[3];
> + priv->rx_ip_tbl[ip_index].ref_count++;
> +
> + priv->rx_sa_tbl[sa_index].spi =
> + rte_cpu_to_be_32(ic_session->spi);
> + priv->rx_sa_tbl[sa_index].ip_index = ip_index;
> + priv->rx_sa_tbl[sa_index].key[3] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
> + priv->rx_sa_tbl[sa_index].key[2] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
> + priv->rx_sa_tbl[sa_index].key[1] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
> + priv->rx_sa_tbl[sa_index].key[0] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
> + priv->rx_sa_tbl[sa_index].salt =
> + rte_cpu_to_be_32(ic_session->salt);
> + priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID;
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
> + priv->rx_sa_tbl[sa_index].mode |=
> + (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
> + if (ic_session->dst_ip.type == IPv6)
> + priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6;
> + priv->rx_sa_tbl[sa_index].used = 1;
> +
> + /* write IP table entry*/
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
> + IPSRXIDX_TABLE_IP | (ip_index << 3);
> + if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) {
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
> + priv->rx_ip_tbl[ip_index].ip.ipv4);
> + } else {
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[0]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[1]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[2]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[3]);
> + }
> + IXGBE_WAIT_RWRITE;
> +
> + /* write SPI table entry*/
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
> + IPSRXIDX_TABLE_SPI | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
> + priv->rx_sa_tbl[sa_index].spi);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
> + priv->rx_sa_tbl[sa_index].ip_index);
> + IXGBE_WAIT_RWRITE;
> +
> + /* write Key table entry*/
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
> + IPSRXIDX_TABLE_KEY | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
> + priv->rx_sa_tbl[sa_index].key[0]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
> + priv->rx_sa_tbl[sa_index].key[1]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
> + priv->rx_sa_tbl[sa_index].key[2]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
> + priv->rx_sa_tbl[sa_index].key[3]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
> + priv->rx_sa_tbl[sa_index].salt);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
> + priv->rx_sa_tbl[sa_index].mode);
> + IXGBE_WAIT_RWRITE;
> +
> + } else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
> + int i;
> +
> + /* Find a free entry in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->tx_sa_tbl[i].used == 0) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no free entries*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "No free entry left in the Tx SA table\n");
> + return -1;
> + }
> +
> + priv->tx_sa_tbl[sa_index].spi =
> + rte_cpu_to_be_32(ic_session->spi);
> + priv->tx_sa_tbl[sa_index].key[3] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
> + priv->tx_sa_tbl[sa_index].key[2] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
> + priv->tx_sa_tbl[sa_index].key[1] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
> + priv->tx_sa_tbl[sa_index].key[0] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
> + priv->tx_sa_tbl[sa_index].salt =
> + rte_cpu_to_be_32(ic_session->salt);
> +
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
> + priv->tx_sa_tbl[sa_index].key[0]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
> + priv->tx_sa_tbl[sa_index].key[1]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
> + priv->tx_sa_tbl[sa_index].key[2]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
> + priv->tx_sa_tbl[sa_index].key[3]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
> + priv->tx_sa_tbl[sa_index].salt);
> + IXGBE_WAIT_TWRITE;
> +
> + priv->tx_sa_tbl[sa_index].used = 1;
> + ic_session->sa_index = sa_index;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
> + struct ixgbe_crypto_session *ic_session)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct ixgbe_ipsec *priv =
> + IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
> + uint32_t reg_val;
> + int sa_index = -1;
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> + int i, ip_index = -1;
> +
> + /* Find a match in the IP table*/
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) {
> + ip_index = i;
> + break;
> + }
> + }
> +
> + /* Fail if no match*/
> + if (ip_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "Entry not found in the Rx IP table\n");
> + return -1;
> + }
> +
> + /* Find a match in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->rx_sa_tbl[i].spi ==
> + rte_cpu_to_be_32(ic_session->spi)) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no match*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "Entry not found in the Rx SA table\n");
> + return -1;
> + }
> +
> + /* Disable and clear Rx SPI and key table entries*/
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
> + IXGBE_WAIT_RWRITE;
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
> + IXGBE_WAIT_RWRITE;
> + priv->rx_sa_tbl[sa_index].used = 0;
> +
> + /* If last used then clear the IP table entry*/
> + priv->rx_ip_tbl[ip_index].ref_count--;
> + if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP |
> + (ip_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
> + }
> + } else { /* session->dir == RTE_CRYPTO_OUTBOUND */
> + int i;
> +
> + /* Find a match in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->tx_sa_tbl[i].spi ==
> + rte_cpu_to_be_32(ic_session->spi)) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no match*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "Entry not found in the Tx SA table\n");
> + return -1;
> + }
> + reg_val = IPSRXIDX_WRITE | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
> + IXGBE_WAIT_TWRITE;
> +
> + priv->tx_sa_tbl[sa_index].used = 0;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_create_session(void *device,
> + struct rte_security_session_conf *conf,
> + struct rte_security_session *session,
> + struct rte_mempool *mempool)
> +{
> + struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
> + struct ixgbe_crypto_session *ic_session = NULL;
> + struct rte_crypto_aead_xform *aead_xform;
> + struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
> +
> + if (rte_mempool_get(mempool, (void **)&ic_session)) {
> + PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
> + return -ENOMEM;
> + }
> +
> + if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
> + conf->crypto_xform->aead.algo !=
> + RTE_CRYPTO_AEAD_AES_GCM) {
> + PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
> + return -ENOTSUP;
> + }
> + aead_xform = &conf->crypto_xform->aead;
> +
> + if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
> + if (dev_conf->rxmode.enable_sec) {
> + ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
> + } else {
> + PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
> + return -ENOTSUP;
> + }
> + } else {
> + if (dev_conf->txmode.enable_sec) {
> + ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
> + } else {
> + PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
> + return -ENOTSUP;
> + }
> + }
> +
> + ic_session->key = aead_xform->key.data;
> + memcpy(&ic_session->salt,
> + &aead_xform->key.data[aead_xform->key.length], 4);
> + ic_session->spi = conf->ipsec.spi;
> + ic_session->dev = eth_dev;
> +
> + set_sec_session_private_data(session, ic_session);
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> + if (ixgbe_crypto_add_sa(ic_session)) {
> + PMD_DRV_LOG(ERR, "Failed to add SA\n");
> + return -EPERM;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_remove_session(void *device,
> + struct rte_security_session *session)
> +{
> + struct rte_eth_dev *eth_dev = device;
> + struct ixgbe_crypto_session *ic_session =
> + (struct ixgbe_crypto_session *)
> + get_sec_session_private_data(session);
> + struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
> +
> + if (eth_dev != ic_session->dev) {
> + PMD_DRV_LOG(ERR, "Session not bound to this device\n");
> + return -ENODEV;
> + }
> +
> + if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
> + PMD_DRV_LOG(ERR, "Failed to remove session\n");
> + return -EFAULT;
> + }
> +
> + rte_mempool_put(mempool, (void *)ic_session);
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_update_mb(void *device __rte_unused,
> + struct rte_security_session *session,
> + struct rte_mbuf *m, void *params __rte_unused)
> +{
> + struct ixgbe_crypto_session *ic_session =
> + get_sec_session_private_data(session);
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> + struct ixgbe_crypto_tx_desc_md *mdata =
> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> + mdata->enc = 1;
> + mdata->sa_idx = ic_session->sa_index;
> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> + }
> + return 0;
> +}
> +
> +struct rte_cryptodev_capabilities aes_gmac_crypto_capabilities[] = {
> + { /* AES GMAC (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + {
> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
> + }, }
> + },
> +};
> +
> +struct rte_cryptodev_capabilities aes_gcm_gmac_crypto_capabilities[] = {
> + { /* AES GMAC (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + { /* AES GCM (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
> + {.aead = {
> + .algo = RTE_CRYPTO_AEAD_AES_GCM,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 8,
> + .max = 16,
> + .increment = 4
> + },
> + .aad_size = {
> + .min = 0,
> + .max = 65535,
> + .increment = 1
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + {
> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
> + }, }
> + },
> +};
> +
> +static const struct rte_security_capability ixgbe_security_capabilities[] = {
> + { /* IPsec Inline Crypto ESP Transport Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
> + },
> + { /* IPsec Inline Crypto ESP Transport Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = 0
> + },
> + { /* IPsec Inline Crypto ESP Tunnel Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
> + },
> + { /* IPsec Inline Crypto ESP Tunnel Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = 0
> + },
> + {
> + .action = RTE_SECURITY_ACTION_TYPE_NONE
> + }
> +};
> +
> +static const struct rte_security_capability *
> +ixgbe_crypto_capabilities_get(void *device __rte_unused)
> +{
> + return ixgbe_security_capabilities;
> +}
> +
> +
> +int
> +ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + uint32_t reg;
> +
> + /* sanity checks */
> + if (dev->data->dev_conf.rxmode.enable_lro) {
> + PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
> + return -1;
> + }
> + if (!dev->data->dev_conf.rxmode.hw_strip_crc) {
> + PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
> + return -1;
> + }
> +
> +
> + /* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
> + IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
> +
> + /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
> + * hang will occur with heavy traffic.
> + */
> + reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
> + reg = (reg & 0xFFFFFFF0) | 0x3;
> + IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
> +
> + reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> + reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
> + IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
> +
> + if (dev->data->dev_conf.rxmode.enable_sec) {
> + IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
> + reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
> + if (reg != 0) {
> + PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
> + return -1;
> + }
> + }
> + if (dev->data->dev_conf.txmode.enable_sec) {
> + IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
> + IXGBE_SECTXCTRL_STORE_FORWARD);
> + reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
> + if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
> + PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
> + return -1;
> + }
> + }
> +
> + ixgbe_crypto_clear_ipsec_tables(dev);
> +
> + return 0;
> +}
> +
> +int
> +ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
> + const void *ip_spec,
> + uint8_t is_ipv6)
> +{
> + struct ixgbe_crypto_session *ic_session
> + = get_sec_session_private_data(sess);
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> + if (is_ipv6) {
> + const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
> + ic_session->src_ip.type = IPv6;
> + ic_session->dst_ip.type = IPv6;
> + rte_memcpy(ic_session->src_ip.ipv6,
> + ipv6->hdr.src_addr, 16);
> + rte_memcpy(ic_session->dst_ip.ipv6,
> + ipv6->hdr.dst_addr, 16);
> + } else {
> + const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
> + ic_session->src_ip.type = IPv4;
> + ic_session->dst_ip.type = IPv4;
> + ic_session->src_ip.ipv4 = ipv4->hdr.src_addr;
> + ic_session->dst_ip.ipv4 = ipv4->hdr.dst_addr;
> + }
> + return ixgbe_crypto_add_sa(ic_session);
> + }
> +
> + return 0;
> +}
> +
> +
> +struct rte_security_ops ixgbe_security_ops = {
> + .session_create = ixgbe_crypto_create_session,
> + .session_update = NULL,
> + .session_stats_get = NULL,
> + .session_destroy = ixgbe_crypto_remove_session,
> +
> + .set_pkt_metadata = ixgbe_crypto_update_mb,
> +
> + .capabilities_get = ixgbe_crypto_capabilities_get
> +};
> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
> new file mode 100644
> index 0000000..9f06235
> --- /dev/null
> +++ b/drivers/net/ixgbe/ixgbe_ipsec.h
> @@ -0,0 +1,147 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef IXGBE_IPSEC_H_
> +#define IXGBE_IPSEC_H_
> +
> +#include <rte_security.h>
> +
> +#define IPSRXIDX_RX_EN 0x00000001
> +#define IPSRXIDX_TABLE_IP 0x00000002
> +#define IPSRXIDX_TABLE_SPI 0x00000004
> +#define IPSRXIDX_TABLE_KEY 0x00000006
> +#define IPSRXIDX_WRITE 0x80000000
> +#define IPSRXIDX_READ 0x40000000
> +#define IPSRXMOD_VALID 0x00000001
> +#define IPSRXMOD_PROTO 0x00000004
> +#define IPSRXMOD_DECRYPT 0x00000008
> +#define IPSRXMOD_IPV6 0x00000010
> +#define IXGBE_ADVTXD_POPTS_IPSEC 0x00000400
> +#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000
> +#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000
> +#define IXGBE_RXDADV_IPSEC_STATUS_SECP 0x00020000
> +#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK 0x18000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH 0x10000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED 0x18000000
> +
> +#define IPSEC_MAX_RX_IP_COUNT 128
> +#define IPSEC_MAX_SA_COUNT 1024
> +
> +enum ixgbe_operation {
> + IXGBE_OP_AUTHENTICATED_ENCRYPTION,
> + IXGBE_OP_AUTHENTICATED_DECRYPTION
> +};
> +
> +enum ixgbe_gcm_key {
> + IXGBE_GCM_KEY_128,
> + IXGBE_GCM_KEY_256
> +};
> +
> +/**
> + * Generic IP address structure
> + * TODO: Find a better location for this, possibly rte_net.h.
> + **/
> +struct ipaddr {
> + enum ipaddr_type {
> + IPv4,
> + IPv6
> + } type;
> + /**< IP Address Type - IPv4/IPv6 */
> +
> + union {
> + uint32_t ipv4;
> + uint32_t ipv6[4];
> + };
> +};
> +
> +/** inline crypto private session structure */
> +struct ixgbe_crypto_session {
> + enum ixgbe_operation op;
> + uint8_t *key;
> + uint32_t salt;
> + uint32_t sa_index;
> + uint32_t spi;
> + struct ipaddr src_ip;
> + struct ipaddr dst_ip;
> + struct rte_eth_dev *dev;
> +} __rte_cache_aligned;
> +
> +struct ixgbe_crypto_rx_ip_table {
> + struct ipaddr ip;
> + uint16_t ref_count;
> +};
> +struct ixgbe_crypto_rx_sa_table {
> + uint32_t spi;
> + uint32_t ip_index;
> + uint32_t key[4];
> + uint32_t salt;
> + uint8_t mode;
> + uint8_t used;
> +};
> +
> +struct ixgbe_crypto_tx_sa_table {
> + uint32_t spi;
> + uint32_t key[4];
> + uint32_t salt;
> + uint8_t used;
> +};
> +
> +struct ixgbe_crypto_tx_desc_md {
> + union {
> + uint64_t data;
> + struct {
> + uint32_t sa_idx;
> + uint8_t pad_len;
> + uint8_t enc;
> + };
> + };
> +};
> +
> +struct ixgbe_ipsec {
> + struct ixgbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
> + struct ixgbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
> + struct ixgbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
> +};
> +
> +extern struct rte_security_ops ixgbe_security_ops;
> +
> +
> +int ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
> +int ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
> + const void *ip_spec,
> + uint8_t is_ipv6);
> +
> +
> +
> +#endif /*IXGBE_IPSEC_H_*/
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 0038dfb..279e3fa 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -93,6 +93,7 @@
> PKT_TX_TCP_SEG | \
> PKT_TX_MACSEC | \
> PKT_TX_OUTER_IP_CKSUM | \
> + PKT_TX_SEC_OFFLOAD | \
> IXGBE_TX_IEEE1588_TMST)
>
> #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
> @@ -395,7 +396,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
> static inline void
> ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
> volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
> - uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
> + uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
> + struct rte_mbuf *mb)
> {
> uint32_t type_tucmd_mlhl;
> uint32_t mss_l4len_idx = 0;
> @@ -479,6 +481,18 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
> seqnum_seed |= tx_offload.l2_len
> << IXGBE_ADVTXD_TUNNEL_LEN;
> }
> + if (mb->ol_flags & PKT_TX_SEC_OFFLOAD) {
> + struct ixgbe_crypto_tx_desc_md *mdata =
> + (struct ixgbe_crypto_tx_desc_md *)
> + &mb->udata64;
> + seqnum_seed |=
> + (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata->sa_idx);
> + type_tucmd_mlhl |= mdata->enc ?
> + (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
> + IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
> + type_tucmd_mlhl |=
> + (mdata->pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
> + }
>
> txq->ctx_cache[ctx_idx].flags = ol_flags;
> txq->ctx_cache[ctx_idx].tx_offload.data[0] =
> @@ -657,6 +671,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> uint32_t ctx = 0;
> uint32_t new_ctx;
> union ixgbe_tx_offload tx_offload;
> + uint8_t use_ipsec;
>
> tx_offload.data[0] = 0;
> tx_offload.data[1] = 0;
> @@ -684,6 +699,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> * are needed for offload functionality.
> */
> ol_flags = tx_pkt->ol_flags;
> + use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
>
> /* If hardware offload required */
> tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
> @@ -695,6 +711,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> tx_offload.tso_segsz = tx_pkt->tso_segsz;
> tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
> tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
> + if (use_ipsec) {
> + struct ixgbe_crypto_tx_desc_md *ipsec_mdata =
> + (struct ixgbe_crypto_tx_desc_md *)
> + &tx_pkt->udata64;
> + tx_offload.sa_idx = ipsec_mdata->sa_idx;
> + tx_offload.sec_pad_len = ipsec_mdata->pad_len;
> + }
>
> /* If new context need be built or reuse the exist ctx. */
> ctx = what_advctx_update(txq, tx_ol_req,
> @@ -855,7 +878,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> }
>
> ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
> - tx_offload);
> + tx_offload, tx_pkt);
>
> txe->last_id = tx_last;
> tx_id = txe->next_id;
> @@ -873,6 +896,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> }
>
> olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> + if (use_ipsec)
> + olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
>
> m_seg = tx_pkt;
> do {
> @@ -1447,6 +1472,12 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
> pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
> }
>
> + if (rx_status & IXGBE_RXD_STAT_SECP) {
> + pkt_flags |= PKT_RX_SEC_OFFLOAD;
> + if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
> + pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
> + }
> +
> return pkt_flags;
> }
>
> @@ -2364,8 +2395,9 @@ void __attribute__((cold))
> ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
> {
> /* Use a simple Tx queue (no offloads, no multi segs) if possible */
> - if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS)
> - && (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
> + if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS) &&
> + (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) &&
> + !(dev->data->dev_conf.txmode.enable_sec)) {
> PMD_INIT_LOG(DEBUG, "Using simple tx code path");
> dev->tx_pkt_prepare = NULL;
> #ifdef RTE_IXGBE_INC_VECTOR
> @@ -2535,6 +2567,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
> txq->txq_flags = tx_conf->txq_flags;
> txq->ops = &def_txq_ops;
> txq->tx_deferred_start = tx_conf->tx_deferred_start;
> + txq->using_ipsec = dev->data->dev_conf.txmode.enable_sec;
>
> /*
> * Modification to set VFTDT for virtual function if vf is detected
> @@ -4519,6 +4552,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
> struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
>
> rxq->rx_using_sse = rx_using_sse;
> + rxq->using_ipsec = dev->data->dev_conf.rxmode.enable_sec;
> }
> }
>
> @@ -5006,6 +5040,17 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
> dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
> ixgbe_setup_loopback_link_82599(hw);
>
> + if (dev->data->dev_conf.rxmode.enable_sec ||
> + dev->data->dev_conf.txmode.enable_sec) {
> + ret = ixgbe_crypto_enable_ipsec(dev);
> + if (ret != 0) {
> + PMD_DRV_LOG(ERR,
> + "ixgbe_crypto_enable_ipsec fails with %d.",
> + ret);
> + return ret;
> + }
> + }
> +
> return 0;
> }
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 81c527f..4017831 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -138,8 +138,10 @@ struct ixgbe_rx_queue {
> uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
> uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
> uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
> - uint16_t rx_using_sse;
> + uint8_t rx_using_sse;
> /**< indicates that vector RX is in use */
> + uint8_t using_ipsec;
> + /**< indicates that IPsec RX feature is in use */
> #ifdef RTE_IXGBE_INC_VECTOR
> uint16_t rxrearm_nb; /**< number of remaining to be re-armed */
> uint16_t rxrearm_start; /**< the idx we start the re-arming from */
> @@ -183,6 +185,10 @@ union ixgbe_tx_offload {
> /* fields for TX offloading of tunnels */
> uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
> uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
> +
> + /* inline ipsec related*/
> + uint64_t sa_idx:8; /**< TX SA database entry index */
> + uint64_t sec_pad_len:4; /**< padding length */
> };
> };
>
> @@ -247,6 +253,9 @@ struct ixgbe_tx_queue {
> struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
> const struct ixgbe_txq_ops *ops; /**< txq ops */
> uint8_t tx_deferred_start; /**< not in global dev start. */
> + uint8_t using_ipsec;
> + /**< indicates that IPsec TX feature is in use */
> +
> };
>
> struct ixgbe_txq_ops {
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> index e704a7f..c9b1e2e 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> @@ -124,10 +124,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>
> static inline void
> desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
> - struct rte_mbuf **rx_pkts)
> + struct rte_mbuf **rx_pkts, uint8_t use_ipsec)
> {
> __m128i ptype0, ptype1, vtag0, vtag1, csum;
> __m128i rearm0, rearm1, rearm2, rearm3;
> + __m128i sterr0, sterr1, sterr2, sterr3;
> + __m128i tmp1, tmp2, tmp3, tmp4;
>
> /* mask everything except rss type */
> const __m128i rsstype_msk = _mm_set_epi16(
> @@ -174,6 +176,41 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
> 0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
> PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
>
> + const __m128i ipsec_sterr_msk = _mm_set_epi32(
> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
> + IXGBE_RXDADV_IPSEC_ERROR_AUTH_FAILED,
> + 0, 0);
> + const __m128i ipsec_proc_msk = _mm_set_epi32(
> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
> + const __m128i ipsec_err_flag = _mm_set_epi32(
> + 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
> + 0, 0);
> + const __m128i ipsec_proc_flag = _mm_set_epi32(
> + 0, PKT_RX_SEC_OFFLOAD, 0, 0);
> +
> + if (use_ipsec) {
> + sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
> + sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
> + sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
> + sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
> + tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
> + tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
> + tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
> + tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
> + sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> + _mm_and_si128(tmp2, ipsec_proc_flag));
> + sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
> + _mm_and_si128(tmp4, ipsec_proc_flag));
> + tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
> + tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
> + tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
> + tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
> + sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> + _mm_and_si128(tmp2, ipsec_proc_flag));
> + sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
> + _mm_and_si128(tmp4, ipsec_proc_flag));
> + }
> +
> ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
> ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
> vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
> @@ -221,6 +258,13 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
> rearm2 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 4), 0x10);
> rearm3 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 2), 0x10);
>
> + if (use_ipsec) {
> + rearm0 = _mm_or_si128(rearm0, sterr0);
> + rearm1 = _mm_or_si128(rearm1, sterr1);
> + rearm2 = _mm_or_si128(rearm2, sterr2);
> + rearm3 = _mm_or_si128(rearm3, sterr3);
> + }
> +
> /* write the rearm data and the olflags in one write */
> RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
> offsetof(struct rte_mbuf, rearm_data) + 8);
> @@ -310,6 +354,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> volatile union ixgbe_adv_rx_desc *rxdp;
> struct ixgbe_rx_entry *sw_ring;
> uint16_t nb_pkts_recd;
> + uint8_t use_ipsec = rxq->using_ipsec;
> int pos;
> uint64_t var;
> __m128i shuf_msk;
> @@ -471,7 +516,8 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
>
> /* set ol_flags with vlan packet type */
> - desc_to_olflags_v(descs, mbuf_init, vlan_flags, &rx_pkts[pos]);
> + desc_to_olflags_v(descs, mbuf_init, vlan_flags,
> + &rx_pkts[pos], use_ipsec);
>
> /* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
> pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-15 12:51 ` Aviad Yehezkel
@ 2017-10-16 10:41 ` Thomas Monjalon
0 siblings, 0 replies; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-16 10:41 UTC (permalink / raw)
To: Aviad Yehezkel
Cc: Akhil Goyal, dev, declan.doherty, pablo.de.lara.guarch,
hemant.agrawal, radu.nicolau, borisp, aviadye, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
15/10/2017 14:51, Aviad Yehezkel:
>
> On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> > From: Radu Nicolau <radu.nicolau@intel.com>
> >
> > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > ---
> > drivers/net/Makefile | 2 +-
> > drivers/net/ixgbe/Makefile | 2 +-
> > drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
> > drivers/net/ixgbe/ixgbe_ethdev.c | 19 +
> > drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
> > drivers/net/ixgbe/ixgbe_flow.c | 47 +++
> > drivers/net/ixgbe/ixgbe_ipsec.c | 744 +++++++++++++++++++++++++++++++++
> > drivers/net/ixgbe/ixgbe_ipsec.h | 147 +++++++
> > drivers/net/ixgbe/ixgbe_rxtx.c | 53 ++-
> > drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
> > drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 50 ++-
> > 11 files changed, 1079 insertions(+), 10 deletions(-)
> > create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
> > create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
> >
[all code lines cut]
Please Aviad, remove the useless lines when replying.
It is really annoying to scroll the whole patch to find where you replied.
> Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
Really? You have tested the ixgbe driver?
When providing a test acknowledgement, it is more valuable to provide
a brief test report:
- which hardware
- which use case
- results
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec Akhil Goyal
2017-10-15 12:51 ` Aviad Yehezkel
@ 2017-10-18 21:29 ` Ananyev, Konstantin
2017-10-19 10:51 ` Radu Nicolau
2017-10-19 9:04 ` Ananyev, Konstantin
2 siblings, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-18 21:29 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
Hi Radu,
A few comments from me below.
Konstantin
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Saturday, October 14, 2017 11:18 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> Nicolau, Radu <radu.nicolau@intel.com>; borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara, John <john.mcnamara@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> Subject: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>
> From: Radu Nicolau <radu.nicolau@intel.com>
>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> drivers/net/Makefile | 2 +-
> drivers/net/ixgbe/Makefile | 2 +-
> drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 19 +
> drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
> drivers/net/ixgbe/ixgbe_flow.c | 47 +++
> drivers/net/ixgbe/ixgbe_ipsec.c | 744 +++++++++++++++++++++++++++++++++
> drivers/net/ixgbe/ixgbe_ipsec.h | 147 +++++++
> drivers/net/ixgbe/ixgbe_rxtx.c | 53 ++-
> drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 50 ++-
> 11 files changed, 1079 insertions(+), 10 deletions(-)
> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
>
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index 5d2ad2f..339ff36 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -68,7 +68,7 @@ DEPDIRS-fm10k = $(core-libs) librte_hash
> DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
> DEPDIRS-i40e = $(core-libs) librte_hash
> DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
> -DEPDIRS-ixgbe = $(core-libs) librte_hash
> +DEPDIRS-ixgbe = $(core-libs) librte_hash librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
> DEPDIRS-liquidio = $(core-libs)
> DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
> diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
> index 95c806d..6e963c7 100644
> --- a/drivers/net/ixgbe/Makefile
> +++ b/drivers/net/ixgbe/Makefile
> @@ -118,11 +118,11 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
> else
> SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
> endif
> -
> ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
> endif
> +SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
> SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
>
> diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
> index 4aab278..b132a0f 100644
> --- a/drivers/net/ixgbe/base/ixgbe_osdep.h
> +++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
> @@ -161,4 +161,12 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
> #define IXGBE_WRITE_REG_ARRAY(hw, reg, index, value) \
> IXGBE_PCI_REG_WRITE(IXGBE_PCI_REG_ARRAY_ADDR((hw), (reg), (index)), (value))
>
> +#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
> +{ \
> + uint32_t cnt = poll_ms; \
> + IXGBE_WRITE_REG(hw, (reg), (val)); \
> + while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
> + rte_delay_ms(1); \
> +}
> +
As you have a macro that consists of multiple statements, you'll need a do { ... } while (0)
wrapper around it.
Though I would still suggest making it an inline function - that would be much better.
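For example, a minimal sketch of the inline variant (same behaviour as the macro above;
the function name is only a placeholder):

static inline void
ixgbe_write_reg_then_poll_mask(struct ixgbe_hw *hw, uint32_t reg,
			       uint32_t val, uint32_t mask,
			       uint32_t poll_ms)
{
	uint32_t cnt = poll_ms;

	/* write the register, then poll until the masked bit clears
	 * or the timeout (in milliseconds) expires
	 */
	IXGBE_WRITE_REG(hw, reg, val);
	while ((IXGBE_READ_REG(hw, reg) & mask) && (cnt--))
		rte_delay_ms(1);
}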
> #endif /* _IXGBE_OS_H_ */
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 14b9c53..fcabd5e 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -61,6 +61,7 @@
> #include <rte_random.h>
> #include <rte_dev.h>
> #include <rte_hash_crc.h>
> +#include <rte_security_driver.h>
>
> #include "ixgbe_logs.h"
> #include "base/ixgbe_api.h"
> @@ -1132,6 +1133,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
> struct ixgbe_bw_conf *bw_conf =
> IXGBE_DEV_PRIVATE_TO_BW_CONF(eth_dev->data->dev_private);
> + struct rte_security_ctx *security_instance;
> uint32_t ctrl_ext;
> uint16_t csum;
> int diag, i;
> @@ -1139,6 +1141,17 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> PMD_INIT_FUNC_TRACE();
>
> eth_dev->dev_ops = &ixgbe_eth_dev_ops;
> + security_instance = rte_malloc("rte_security_instances_ops",
> + sizeof(struct rte_security_ctx), 0);
> + if (security_instance == NULL)
> + return -ENOMEM;
> + security_instance->state = RTE_SECURITY_INSTANCE_VALID;
> + security_instance->device = (void *)eth_dev;
> + security_instance->ops = &ixgbe_security_ops;
> + security_instance->sess_cnt = 0;
> +
> + eth_dev->data->security_ctx = security_instance;
> +
> eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
> eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> @@ -1169,6 +1182,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>
> rte_eth_copy_pci_info(eth_dev, pci_dev);
> eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
> + eth_dev->data->dev_flags |= RTE_ETH_DEV_SECURITY;
>
> /* Vendor and Device ID need to be set before init of shared code */
> hw->device_id = pci_dev->id.device_id;
> @@ -1401,6 +1415,8 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
> /* Remove all Traffic Manager configuration */
> ixgbe_tm_conf_uninit(eth_dev);
>
> + rte_free(eth_dev->data->security_ctx);
> +
> return 0;
> }
>
> @@ -3695,6 +3711,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> hw->mac.type == ixgbe_mac_X550EM_a)
> dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
>
> + dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
> + dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
> +
> dev_info->default_rxconf = (struct rte_eth_rxconf) {
> .rx_thresh = {
> .pthresh = IXGBE_DEFAULT_RX_PTHRESH,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> index e28c856..f5b52c4 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -38,6 +38,7 @@
> #include "base/ixgbe_dcb_82599.h"
> #include "base/ixgbe_dcb_82598.h"
> #include "ixgbe_bypass.h"
> +#include "ixgbe_ipsec.h"
> #include <rte_time.h>
> #include <rte_hash.h>
> #include <rte_pci.h>
> @@ -486,7 +487,7 @@ struct ixgbe_adapter {
> struct ixgbe_filter_info filter;
> struct ixgbe_l2_tn_info l2_tn;
> struct ixgbe_bw_conf bw_conf;
> -
> + struct ixgbe_ipsec ipsec;
> bool rx_bulk_alloc_allowed;
> bool rx_vec_allowed;
> struct rte_timecounter systime_tc;
> @@ -543,6 +544,9 @@ struct ixgbe_adapter {
> #define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
> (&((struct ixgbe_adapter *)adapter)->tm_conf)
>
> +#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
> + (&((struct ixgbe_adapter *)adapter)->ipsec)
> +
> /*
> * RX/TX function prototypes
> */
> diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
> index 904c146..13c8243 100644
> --- a/drivers/net/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/ixgbe/ixgbe_flow.c
> @@ -187,6 +187,9 @@ const struct rte_flow_action *next_no_void_action(
> * END
> * other members in mask and spec should set to 0x00.
> * item->last should be NULL.
> + *
> + * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY.
> + *
> */
> static int
> cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> @@ -226,6 +229,41 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> return -rte_errno;
> }
>
> + /**
> + * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
> + */
> + act = next_no_void_action(actions, NULL);
> + if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
> + const void *conf = act->conf;
> + /* check if the next not void item is END */
> + act = next_no_void_action(actions, act);
> + if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "Not supported action.");
> + return -rte_errno;
> + }
> +
> + /* get the IP pattern*/
> + item = next_no_void_pattern(pattern, NULL);
> + while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> + item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
> + if (item->last ||
> + item->type == RTE_FLOW_ITEM_TYPE_END) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "IP pattern missing.");
> + return -rte_errno;
> + }
> + item = next_no_void_pattern(pattern, item);
> + }
> +
> + filter->proto = IPPROTO_ESP;
> + return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
> + item->type == RTE_FLOW_ITEM_TYPE_IPV6);
> + }
> +
> /* the first not void item can be MAC or IPv4 */
> item = next_no_void_pattern(pattern, NULL);
>
> @@ -519,6 +557,10 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
> if (ret)
> return ret;
>
> + /* ESP flow not really a flow*/
> + if (filter->proto == IPPROTO_ESP)
> + return 0;
> +
> /* Ixgbe doesn't support tcp flags. */
> if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
> memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> @@ -2758,6 +2800,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
> memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
> ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
> actions, &ntuple_filter, error);
> +
> + /* ESP flow not really a flow*/
> + if (ntuple_filter.proto == IPPROTO_ESP)
> + return flow;
> +
> if (!ret) {
> ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
> if (!ret) {
> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
> new file mode 100644
> index 0000000..6ace305
> --- /dev/null
> +++ b/drivers/net/ixgbe/ixgbe_ipsec.c
> @@ -0,0 +1,744 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_ethdev.h>
> +#include <rte_ethdev_pci.h>
> +#include <rte_ip.h>
> +#include <rte_jhash.h>
> +#include <rte_security_driver.h>
> +#include <rte_cryptodev.h>
> +#include <rte_flow.h>
> +
> +#include "base/ixgbe_type.h"
> +#include "base/ixgbe_api.h"
> +#include "ixgbe_ethdev.h"
> +#include "ixgbe_ipsec.h"
> +
> +#define RTE_IXGBE_REGISTER_POLL_WAIT_5_MS 5
> +
> +#define IXGBE_WAIT_RREAD \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
> + IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +#define IXGBE_WAIT_RWRITE \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
> + IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +#define IXGBE_WAIT_TREAD \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
> + IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +#define IXGBE_WAIT_TWRITE \
> + IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
> + IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
> +
> +#define CMP_IP(a, b) (\
> + (a).ipv6[0] == (b).ipv6[0] && \
> + (a).ipv6[1] == (b).ipv6[1] && \
> + (a).ipv6[2] == (b).ipv6[2] && \
> + (a).ipv6[3] == (b).ipv6[3])
> +
> +
> +static void
> +ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + int i = 0;
> +
> + /* clear Rx IP table*/
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + uint16_t index = i << 3;
> + uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
> + IXGBE_WAIT_RWRITE;
> + }
> +
> + /* clear Rx SPI and Rx/Tx SA tables*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + uint32_t index = i << 3;
> + uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
> + IXGBE_WAIT_RWRITE;
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
> + IXGBE_WAIT_RWRITE;
> + reg_val = IPSRXIDX_WRITE | index;
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
> + IXGBE_WAIT_TWRITE;
> + }
> +}
> +
> +static int
> +ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
> +{
> + struct rte_eth_dev *dev = ic_session->dev;
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
> + dev->data->dev_private);
> + uint32_t reg_val;
> + int sa_index = -1;
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> + int i, ip_index = -1;
> +
> + /* Find a match in the IP table*/
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + if (CMP_IP(priv->rx_ip_tbl[i].ip,
> + ic_session->dst_ip)) {
> + ip_index = i;
> + break;
> + }
> + }
> + /* If no match, find a free entry in the IP table*/
> + if (ip_index < 0) {
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + if (priv->rx_ip_tbl[i].ref_count == 0) {
> + ip_index = i;
> + break;
> + }
> + }
> + }
> +
> + /* Fail if no match and no free entries*/
> + if (ip_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "No free entry left in the Rx IP table\n");
> + return -1;
> + }
> +
> + /* Find a free entry in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->rx_sa_tbl[i].used == 0) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no free entries*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "No free entry left in the Rx SA table\n");
> + return -1;
> + }
> +
> + priv->rx_ip_tbl[ip_index].ip.ipv6[0] =
> + ic_session->dst_ip.ipv6[0];
> + priv->rx_ip_tbl[ip_index].ip.ipv6[1] =
> + ic_session->dst_ip.ipv6[1];
> + priv->rx_ip_tbl[ip_index].ip.ipv6[2] =
> + ic_session->dst_ip.ipv6[2];
> + priv->rx_ip_tbl[ip_index].ip.ipv6[3] =
> + ic_session->dst_ip.ipv6[3];
> + priv->rx_ip_tbl[ip_index].ref_count++;
> +
> + priv->rx_sa_tbl[sa_index].spi =
> + rte_cpu_to_be_32(ic_session->spi);
> + priv->rx_sa_tbl[sa_index].ip_index = ip_index;
> + priv->rx_sa_tbl[sa_index].key[3] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
> + priv->rx_sa_tbl[sa_index].key[2] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
> + priv->rx_sa_tbl[sa_index].key[1] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
> + priv->rx_sa_tbl[sa_index].key[0] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
> + priv->rx_sa_tbl[sa_index].salt =
> + rte_cpu_to_be_32(ic_session->salt);
> + priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID;
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
> + priv->rx_sa_tbl[sa_index].mode |=
> + (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
> + if (ic_session->dst_ip.type == IPv6)
> + priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6;
> + priv->rx_sa_tbl[sa_index].used = 1;
> +
> + /* write IP table entry*/
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
> + IPSRXIDX_TABLE_IP | (ip_index << 3);
> + if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) {
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
> + priv->rx_ip_tbl[ip_index].ip.ipv4);
> + } else {
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[0]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[1]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[2]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
> + priv->rx_ip_tbl[ip_index].ip.ipv6[3]);
> + }
> + IXGBE_WAIT_RWRITE;
> +
> + /* write SPI table entry*/
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
> + IPSRXIDX_TABLE_SPI | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
> + priv->rx_sa_tbl[sa_index].spi);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
> + priv->rx_sa_tbl[sa_index].ip_index);
> + IXGBE_WAIT_RWRITE;
> +
> + /* write Key table entry*/
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
> + IPSRXIDX_TABLE_KEY | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
> + priv->rx_sa_tbl[sa_index].key[0]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
> + priv->rx_sa_tbl[sa_index].key[1]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
> + priv->rx_sa_tbl[sa_index].key[2]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
> + priv->rx_sa_tbl[sa_index].key[3]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
> + priv->rx_sa_tbl[sa_index].salt);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
> + priv->rx_sa_tbl[sa_index].mode);
> + IXGBE_WAIT_RWRITE;
> +
> + } else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
> + int i;
> +
> + /* Find a free entry in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->tx_sa_tbl[i].used == 0) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no free entries*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "No free entry left in the Tx SA table\n");
> + return -1;
> + }
> +
> + priv->tx_sa_tbl[sa_index].spi =
> + rte_cpu_to_be_32(ic_session->spi);
> + priv->tx_sa_tbl[sa_index].key[3] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
> + priv->tx_sa_tbl[sa_index].key[2] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
> + priv->tx_sa_tbl[sa_index].key[1] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
> + priv->tx_sa_tbl[sa_index].key[0] =
> + rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
> + priv->tx_sa_tbl[sa_index].salt =
> + rte_cpu_to_be_32(ic_session->salt);
> +
> + reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
> + priv->tx_sa_tbl[sa_index].key[0]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
> + priv->tx_sa_tbl[sa_index].key[1]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
> + priv->tx_sa_tbl[sa_index].key[2]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
> + priv->tx_sa_tbl[sa_index].key[3]);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
> + priv->tx_sa_tbl[sa_index].salt);
> + IXGBE_WAIT_TWRITE;
> +
> + priv->tx_sa_tbl[sa_index].used = 1;
> + ic_session->sa_index = sa_index;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
> + struct ixgbe_crypto_session *ic_session)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct ixgbe_ipsec *priv =
> + IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
> + uint32_t reg_val;
> + int sa_index = -1;
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> + int i, ip_index = -1;
> +
> + /* Find a match in the IP table*/
> + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> + if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) {
> + ip_index = i;
> + break;
> + }
> + }
> +
> + /* Fail if no match*/
> + if (ip_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "Entry not found in the Rx IP table\n");
> + return -1;
> + }
> +
> + /* Find a match in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->rx_sa_tbl[i].spi ==
> + rte_cpu_to_be_32(ic_session->spi)) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no match*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "Entry not found in the Rx SA table\n");
> + return -1;
> + }
> +
> + /* Disable and clear Rx SPI and key table entries*/
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
> + IXGBE_WAIT_RWRITE;
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
> + IXGBE_WAIT_RWRITE;
> + priv->rx_sa_tbl[sa_index].used = 0;
> +
> + /* If last used then clear the IP table entry*/
> + priv->rx_ip_tbl[ip_index].ref_count--;
> + if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
> + reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP |
> + (ip_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
> + }
> + } else { /* session->dir == RTE_CRYPTO_OUTBOUND */
> + int i;
> +
> + /* Find a match in the SA table*/
> + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> + if (priv->tx_sa_tbl[i].spi ==
> + rte_cpu_to_be_32(ic_session->spi)) {
> + sa_index = i;
> + break;
> + }
> + }
> + /* Fail if no match*/
> + if (sa_index < 0) {
> + PMD_DRV_LOG(ERR,
> + "Entry not found in the Tx SA table\n");
> + return -1;
> + }
> + reg_val = IPSRXIDX_WRITE | (sa_index << 3);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
> + IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
> + IXGBE_WAIT_TWRITE;
> +
> + priv->tx_sa_tbl[sa_index].used = 0;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_create_session(void *device,
> + struct rte_security_session_conf *conf,
> + struct rte_security_session *session,
> + struct rte_mempool *mempool)
> +{
> + struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
> + struct ixgbe_crypto_session *ic_session = NULL;
> + struct rte_crypto_aead_xform *aead_xform;
> + struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
> +
> + if (rte_mempool_get(mempool, (void **)&ic_session)) {
> + PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
> + return -ENOMEM;
> + }
> +
> + if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
> + conf->crypto_xform->aead.algo !=
> + RTE_CRYPTO_AEAD_AES_GCM) {
> + PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
> + return -ENOTSUP;
> + }
> + aead_xform = &conf->crypto_xform->aead;
> +
> + if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
> + if (dev_conf->rxmode.enable_sec) {
> + ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
> + } else {
> + PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
> + return -ENOTSUP;
> + }
> + } else {
> + if (dev_conf->txmode.enable_sec) {
> + ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
> + } else {
> + PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
> + return -ENOTSUP;
> + }
> + }
> +
> + ic_session->key = aead_xform->key.data;
> + memcpy(&ic_session->salt,
> + &aead_xform->key.data[aead_xform->key.length], 4);
> + ic_session->spi = conf->ipsec.spi;
> + ic_session->dev = eth_dev;
> +
> + set_sec_session_private_data(session, ic_session);
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> + if (ixgbe_crypto_add_sa(ic_session)) {
> + PMD_DRV_LOG(ERR, "Failed to add SA\n");
> + return -EPERM;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_remove_session(void *device,
> + struct rte_security_session *session)
> +{
> + struct rte_eth_dev *eth_dev = device;
> + struct ixgbe_crypto_session *ic_session =
> + (struct ixgbe_crypto_session *)
> + get_sec_session_private_data(session);
> + struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
> +
> + if (eth_dev != ic_session->dev) {
> + PMD_DRV_LOG(ERR, "Session not bound to this device\n");
> + return -ENODEV;
> + }
> +
> + if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
> + PMD_DRV_LOG(ERR, "Failed to remove session\n");
> + return -EFAULT;
> + }
> +
> + rte_mempool_put(mempool, (void *)ic_session);
> +
> + return 0;
> +}
> +
> +static int
> +ixgbe_crypto_update_mb(void *device __rte_unused,
> + struct rte_security_session *session,
> + struct rte_mbuf *m, void *params __rte_unused)
> +{
> + struct ixgbe_crypto_session *ic_session =
> + get_sec_session_private_data(session);
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> + struct ixgbe_crypto_tx_desc_md *mdata =
> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> + mdata->enc = 1;
> + mdata->sa_idx = ic_session->sa_index;
> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
Could you explain what pad_len is supposed to contain?
Also, what is the magic constant '18'?
Could you create some macros if needed?
> + }
> + return 0;
> +}
> +
> +struct rte_cryptodev_capabilities aes_gmac_crypto_capabilities[] = {
> + { /* AES GMAC (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + {
> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
> + }, }
> + },
> +};
> +
> +struct rte_cryptodev_capabilities aes_gcm_gmac_crypto_capabilities[] = {
> + { /* AES GMAC (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + { /* AES GCM (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
> + {.aead = {
> + .algo = RTE_CRYPTO_AEAD_AES_GCM,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 8,
> + .max = 16,
> + .increment = 4
> + },
> + .aad_size = {
> + .min = 0,
> + .max = 65535,
> + .increment = 1
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + {
> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
> + }, }
> + },
> +};
> +
> +static const struct rte_security_capability ixgbe_security_capabilities[] = {
> + { /* IPsec Inline Crypto ESP Transport Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
> + },
> + { /* IPsec Inline Crypto ESP Transport Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = 0
> + },
> + { /* IPsec Inline Crypto ESP Tunnel Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
> + },
> + { /* IPsec Inline Crypto ESP Tunnel Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = 0
> + },
> + {
> + .action = RTE_SECURITY_ACTION_TYPE_NONE
> + }
> +};
> +
> +static const struct rte_security_capability *
> +ixgbe_crypto_capabilities_get(void *device __rte_unused)
> +{
As a nit: if ixgbe_security_capabilities is not used in any other place,
you can move its definition inside that function.
> + return ixgbe_security_capabilities;
> +}
> +
> +
> +int
> +ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + uint32_t reg;
> +
> + /* sanity checks */
> + if (dev->data->dev_conf.rxmode.enable_lro) {
> + PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
> + return -1;
> + }
> + if (!dev->data->dev_conf.rxmode.hw_strip_crc) {
> + PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
> + return -1;
> + }
> +
> +
> + /* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
> + IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
> +
> + /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
> + * hang will occur with heavy traffic.
> + */
> + reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
> + reg = (reg & 0xFFFFFFF0) | 0x3;
> + IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
> +
> + reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> + reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
> + IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
> +
> + if (dev->data->dev_conf.rxmode.enable_sec) {
> + IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
> + reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
> + if (reg != 0) {
> + PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
> + return -1;
> + }
> + }
> + if (dev->data->dev_conf.txmode.enable_sec) {
> + IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
> + IXGBE_SECTXCTRL_STORE_FORWARD);
> + reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
> + if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
> + PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
> + return -1;
> + }
> + }
> +
> + ixgbe_crypto_clear_ipsec_tables(dev);
> +
> + return 0;
> +}
> +
> +int
> +ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
> + const void *ip_spec,
> + uint8_t is_ipv6)
> +{
> + struct ixgbe_crypto_session *ic_session
> + = get_sec_session_private_data(sess);
> +
> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> + if (is_ipv6) {
> + const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
> + ic_session->src_ip.type = IPv6;
> + ic_session->dst_ip.type = IPv6;
> + rte_memcpy(ic_session->src_ip.ipv6,
> + ipv6->hdr.src_addr, 16);
> + rte_memcpy(ic_session->dst_ip.ipv6,
> + ipv6->hdr.dst_addr, 16);
> + } else {
> + const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
> + ic_session->src_ip.type = IPv4;
> + ic_session->dst_ip.type = IPv4;
> + ic_session->src_ip.ipv4 = ipv4->hdr.src_addr;
> + ic_session->dst_ip.ipv4 = ipv4->hdr.dst_addr;
> + }
> + return ixgbe_crypto_add_sa(ic_session);
> + }
> +
> + return 0;
> +}
> +
> +
> +struct rte_security_ops ixgbe_security_ops = {
> + .session_create = ixgbe_crypto_create_session,
> + .session_update = NULL,
> + .session_stats_get = NULL,
> + .session_destroy = ixgbe_crypto_remove_session,
> +
> + .set_pkt_metadata = ixgbe_crypto_update_mb,
> +
> + .capabilities_get = ixgbe_crypto_capabilities_get
> +};
> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
> new file mode 100644
> index 0000000..9f06235
> --- /dev/null
> +++ b/drivers/net/ixgbe/ixgbe_ipsec.h
> @@ -0,0 +1,147 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef IXGBE_IPSEC_H_
> +#define IXGBE_IPSEC_H_
> +
> +#include <rte_security.h>
> +
> +#define IPSRXIDX_RX_EN 0x00000001
> +#define IPSRXIDX_TABLE_IP 0x00000002
> +#define IPSRXIDX_TABLE_SPI 0x00000004
> +#define IPSRXIDX_TABLE_KEY 0x00000006
> +#define IPSRXIDX_WRITE 0x80000000
> +#define IPSRXIDX_READ 0x40000000
> +#define IPSRXMOD_VALID 0x00000001
> +#define IPSRXMOD_PROTO 0x00000004
> +#define IPSRXMOD_DECRYPT 0x00000008
> +#define IPSRXMOD_IPV6 0x00000010
> +#define IXGBE_ADVTXD_POPTS_IPSEC 0x00000400
> +#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000
> +#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000
> +#define IXGBE_RXDADV_IPSEC_STATUS_SECP 0x00020000
> +#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK 0x18000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH 0x10000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED 0x18000000
> +
> +#define IPSEC_MAX_RX_IP_COUNT 128
> +#define IPSEC_MAX_SA_COUNT 1024
> +
> +enum ixgbe_operation {
> + IXGBE_OP_AUTHENTICATED_ENCRYPTION,
> + IXGBE_OP_AUTHENTICATED_DECRYPTION
> +};
> +
> +enum ixgbe_gcm_key {
> + IXGBE_GCM_KEY_128,
> + IXGBE_GCM_KEY_256
> +};
> +
> +/**
> + * Generic IP address structure
> + * TODO: Find better location for this rte_net.h possibly.
> + **/
> +struct ipaddr {
> + enum ipaddr_type {
> + IPv4,
> + IPv6
> + } type;
> + /**< IP Address Type - IPv4/IPv6 */
> +
> + union {
> + uint32_t ipv4;
> + uint32_t ipv6[4];
> + };
> +};
> +
> +/** inline crypto crypto private session structure */
> +struct ixgbe_crypto_session {
> + enum ixgbe_operation op;
> + uint8_t *key;
> + uint32_t salt;
> + uint32_t sa_index;
> + uint32_t spi;
> + struct ipaddr src_ip;
> + struct ipaddr dst_ip;
> + struct rte_eth_dev *dev;
> +} __rte_cache_aligned;
> +
> +struct ixgbe_crypto_rx_ip_table {
> + struct ipaddr ip;
> + uint16_t ref_count;
> +};
> +struct ixgbe_crypto_rx_sa_table {
> + uint32_t spi;
> + uint32_t ip_index;
> + uint32_t key[4];
> + uint32_t salt;
> + uint8_t mode;
> + uint8_t used;
> +};
> +
> +struct ixgbe_crypto_tx_sa_table {
> + uint32_t spi;
> + uint32_t key[4];
> + uint32_t salt;
> + uint8_t used;
> +};
> +
> +struct ixgbe_crypto_tx_desc_md {
> + union {
> + uint64_t data;
> + struct {
> + uint32_t sa_idx;
> + uint8_t pad_len;
> + uint8_t enc;
> + };
> + };
> +};
Why not just:
union ixgbe_crypto_tx_desc_md {
uint64_t data;
struct {...};
};
?
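I.e., spelled out in full, something along these lines (field names as in
the patch; the anonymous struct keeps the existing mdata->sa_idx style
accesses working):

union ixgbe_crypto_tx_desc_md {
	uint64_t data;
	struct {
		uint32_t sa_idx;	/* TX SA table index */
		uint8_t pad_len;	/* ESP trailer length incl. ICV */
		uint8_t enc;		/* encryption enabled flag */
	};
};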
> +
> +struct ixgbe_ipsec {
> + struct ixgbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
> + struct ixgbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
> + struct ixgbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
> +};
> +
> +extern struct rte_security_ops ixgbe_security_ops;
> +
> +
> +int ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
> +int ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
> + const void *ip_spec,
> + uint8_t is_ipv6);
> +
> +
> +
> +#endif /*IXGBE_IPSEC_H_*/
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 0038dfb..279e3fa 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -93,6 +93,7 @@
> PKT_TX_TCP_SEG | \
> PKT_TX_MACSEC | \
> PKT_TX_OUTER_IP_CKSUM | \
> + PKT_TX_SEC_OFFLOAD | \
> IXGBE_TX_IEEE1588_TMST)
>
> #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
> @@ -395,7 +396,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
> static inline void
> ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
> volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
> - uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
> + uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
> + struct rte_mbuf *mb)
I don't think you need to pass mb as a parameter to that function:
you already have ol_flags as a parameter, and all you need here is just a
struct ixgbe_crypto_tx_desc_md md as an extra parameter.
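I.e. a signature roughly like this (just sketching the suggestion, passed
by value the same way tx_offload already is):

static inline void
ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
		uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
		struct ixgbe_crypto_tx_desc_md md);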
> {
> uint32_t type_tucmd_mlhl;
> uint32_t mss_l4len_idx = 0;
> @@ -479,6 +481,18 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
> seqnum_seed |= tx_offload.l2_len
> << IXGBE_ADVTXD_TUNNEL_LEN;
> }
> + if (mb->ol_flags & PKT_TX_SEC_OFFLOAD) {
> + struct ixgbe_crypto_tx_desc_md *mdata =
> + (struct ixgbe_crypto_tx_desc_md *)
> + &mb->udata64;
> + seqnum_seed |=
> + (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata->sa_idx);
> + type_tucmd_mlhl |= mdata->enc ?
> + (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
> + IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
> + type_tucmd_mlhl |=
> + (mdata->pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
Shouldn't we also update tx_offload_mask here?
> + }
>
> txq->ctx_cache[ctx_idx].flags = ol_flags;
> txq->ctx_cache[ctx_idx].tx_offload.data[0] =
> @@ -657,6 +671,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> uint32_t ctx = 0;
> uint32_t new_ctx;
> union ixgbe_tx_offload tx_offload;
> + uint8_t use_ipsec;
>
> tx_offload.data[0] = 0;
> tx_offload.data[1] = 0;
> @@ -684,6 +699,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> * are needed for offload functionality.
> */
> ol_flags = tx_pkt->ol_flags;
> + use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
>
> /* If hardware offload required */
> tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
> @@ -695,6 +711,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> tx_offload.tso_segsz = tx_pkt->tso_segsz;
> tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
> tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
> + if (use_ipsec) {
> + struct ixgbe_crypto_tx_desc_md *ipsec_mdata =
> + (struct ixgbe_crypto_tx_desc_md *)
> + &tx_pkt->udata64;
> + tx_offload.sa_idx = ipsec_mdata->sa_idx;
> + tx_offload.sec_pad_len = ipsec_mdata->pad_len;
> + }
>
> /* If new context need be built or reuse the exist ctx. */
> ctx = what_advctx_update(txq, tx_ol_req,
> @@ -855,7 +878,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> }
>
> ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
> - tx_offload);
> + tx_offload, tx_pkt);
>
> txe->last_id = tx_last;
> tx_id = txe->next_id;
> @@ -873,6 +896,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> }
>
> olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> + if (use_ipsec)
> + olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
>
> m_seg = tx_pkt;
> do {
> @@ -1447,6 +1472,12 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
> pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
> }
>
> + if (rx_status & IXGBE_RXD_STAT_SECP) {
> + pkt_flags |= PKT_RX_SEC_OFFLOAD;
> + if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
> + pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
> + }
> +
> return pkt_flags;
> }
>
> @@ -2364,8 +2395,9 @@ void __attribute__((cold))
> ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
> {
> /* Use a simple Tx queue (no offloads, no multi segs) if possible */
> - if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS)
> - && (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
> + if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS) &&
> + (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) &&
> + !(dev->data->dev_conf.txmode.enable_sec)) {
> PMD_INIT_LOG(DEBUG, "Using simple tx code path");
> dev->tx_pkt_prepare = NULL;
> #ifdef RTE_IXGBE_INC_VECTOR
> @@ -2535,6 +2567,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
> txq->txq_flags = tx_conf->txq_flags;
> txq->ops = &def_txq_ops;
> txq->tx_deferred_start = tx_conf->tx_deferred_start;
> + txq->using_ipsec = dev->data->dev_conf.txmode.enable_sec;
>
> /*
> * Modification to set VFTDT for virtual function if vf is detected
> @@ -4519,6 +4552,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
> struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
>
> rxq->rx_using_sse = rx_using_sse;
> + rxq->using_ipsec = dev->data->dev_conf.rxmode.enable_sec;
> }
> }
>
> @@ -5006,6 +5040,17 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
> dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
> ixgbe_setup_loopback_link_82599(hw);
>
> + if (dev->data->dev_conf.rxmode.enable_sec ||
> + dev->data->dev_conf.txmode.enable_sec) {
> + ret = ixgbe_crypto_enable_ipsec(dev);
> + if (ret != 0) {
> + PMD_DRV_LOG(ERR,
> + "ixgbe_crypto_enable_ipsec fails with %d.",
> + ret);
> + return ret;
> + }
> + }
> +
> return 0;
> }
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 81c527f..4017831 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -138,8 +138,10 @@ struct ixgbe_rx_queue {
> uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
> uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
> uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
> - uint16_t rx_using_sse;
> + uint8_t rx_using_sse;
> /**< indicates that vector RX is in use */
> + uint8_t using_ipsec;
> + /**< indicates that IPsec RX feature is in use */
> #ifdef RTE_IXGBE_INC_VECTOR
> uint16_t rxrearm_nb; /**< number of remaining to be re-armed */
> uint16_t rxrearm_start; /**< the idx we start the re-arming from */
> @@ -183,6 +185,10 @@ union ixgbe_tx_offload {
> /* fields for TX offloading of tunnels */
> uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
> uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
> +
> + /* inline ipsec related*/
> + uint64_t sa_idx:8; /**< TX SA database entry index */
> + uint64_t sec_pad_len:4; /**< padding length */
> };
> };
>
> @@ -247,6 +253,9 @@ struct ixgbe_tx_queue {
> struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
> const struct ixgbe_txq_ops *ops; /**< txq ops */
> uint8_t tx_deferred_start; /**< not in global dev start. */
> + uint8_t using_ipsec;
> + /**< indicates that IPsec TX feature is in use */
> +
> };
>
> struct ixgbe_txq_ops {
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> index e704a7f..c9b1e2e 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> @@ -124,10 +124,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>
> static inline void
> desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
> - struct rte_mbuf **rx_pkts)
> + struct rte_mbuf **rx_pkts, uint8_t use_ipsec)
> {
> __m128i ptype0, ptype1, vtag0, vtag1, csum;
> __m128i rearm0, rearm1, rearm2, rearm3;
> + __m128i sterr0, sterr1, sterr2, sterr3;
> + __m128i tmp1, tmp2, tmp3, tmp4;
>
> /* mask everything except rss type */
> const __m128i rsstype_msk = _mm_set_epi16(
> @@ -174,6 +176,41 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
> 0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
> PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
>
> + const __m128i ipsec_sterr_msk = _mm_set_epi32(
> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
> + IXGBE_RXDADV_IPSEC_ERROR_AUTH_FAILED,
> + 0, 0);
> + const __m128i ipsec_proc_msk = _mm_set_epi32(
> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
> + const __m128i ipsec_err_flag = _mm_set_epi32(
> + 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
> + 0, 0);
> + const __m128i ipsec_proc_flag = _mm_set_epi32(
> + 0, PKT_RX_SEC_OFFLOAD, 0, 0);
> +
> + if (use_ipsec) {
> + sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
> + sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
> + sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
> + sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
> + tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
> + tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
> + tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
> + tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
> + sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> + _mm_and_si128(tmp2, ipsec_proc_flag));
> + sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
> + _mm_and_si128(tmp4, ipsec_proc_flag));
> + tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
> + tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
> + tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
> + tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
> + sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> + _mm_and_si128(tmp2, ipsec_proc_flag));
> + sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
> + _mm_and_si128(tmp4, ipsec_proc_flag));
> + }
> +
> ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
> ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
> vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
> @@ -221,6 +258,13 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
> rearm2 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 4), 0x10);
> rearm3 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 2), 0x10);
>
> + if (use_ipsec) {
> + rearm0 = _mm_or_si128(rearm0, sterr0);
> + rearm1 = _mm_or_si128(rearm1, sterr1);
> + rearm2 = _mm_or_si128(rearm2, sterr2);
> + rearm3 = _mm_or_si128(rearm3, sterr3);
> + }
> +
> /* write the rearm data and the olflags in one write */
> RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
> offsetof(struct rte_mbuf, rearm_data) + 8);
> @@ -310,6 +354,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> volatile union ixgbe_adv_rx_desc *rxdp;
> struct ixgbe_rx_entry *sw_ring;
> uint16_t nb_pkts_recd;
> + uint8_t use_ipsec = rxq->using_ipsec;
> int pos;
> uint64_t var;
> __m128i shuf_msk;
> @@ -471,7 +516,8 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
>
> /* set ol_flags with vlan packet type */
> - desc_to_olflags_v(descs, mbuf_init, vlan_flags, &rx_pkts[pos]);
> + desc_to_olflags_v(descs, mbuf_init, vlan_flags,
> + &rx_pkts[pos], use_ipsec);
>
> /* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
> pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
> --
> 2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-18 21:29 ` Ananyev, Konstantin
@ 2017-10-19 10:51 ` Radu Nicolau
2017-10-19 11:04 ` Ananyev, Konstantin
0 siblings, 1 reply; 195+ messages in thread
From: Radu Nicolau @ 2017-10-19 10:51 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
Hi,
Comments inline
On 10/18/2017 10:29 PM, Ananyev, Konstantin wrote:
> Hi Radu,
> Few comments from me below.
> Konstantin
>
>> <snip>
>>
>> +#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
>> +{ \
>> + uint32_t cnt = poll_ms; \
>> + IXGBE_WRITE_REG(hw, (reg), (val)); \
>> + while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
>> + rte_delay_ms(1); \
>> +}
>> +
As you have a macro that consists of multiple statements, you'll need a do { } while (0) wrapper
around it.
Though I still suggest making it an inline function - that would be much better.
I will add a do-while wrapper, but making it an inline function there
brings in a circular dependency.
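For reference, the wrapped macro will look roughly like this (same register
helpers as in the patch):

/* Write 'val' to 'reg', then poll until the bits in 'mask' clear or
 * 'poll_ms' milliseconds have passed. The do { } while (0) wrapper makes
 * the macro expand to a single statement, so it stays safe inside an
 * unbraced if/else.
 */
#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
do { \
	uint32_t cnt = (poll_ms); \
	IXGBE_WRITE_REG(hw, (reg), (val)); \
	while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
		rte_delay_ms(1); \
} while (0)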
>
>> <snip>
>> +
>> +static int
>> +ixgbe_crypto_update_mb(void *device __rte_unused,
>> + struct rte_security_session *session,
>> + struct rte_mbuf *m, void *params __rte_unused)
>> +{
>> + struct ixgbe_crypto_session *ic_session =
>> + get_sec_session_private_data(session);
>> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
>> + struct ixgbe_crypto_tx_desc_md *mdata =
>> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
>> + mdata->enc = 1;
>> + mdata->sa_idx = ic_session->sa_index;
>> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
>> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> Could you explain what pad_len supposed to contain?
> Also what is a magical constant '18'?
> Could you create some macro if needed?
I added an explanation in the code: we read the payload padding size
that is stored 18 bytes before the end of the packet and add 18 bytes,
2 for the ESP trailer and 16 for the ICV.
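Roughly, with named constants it reads like this (the macro and helper
names below are only illustrative, not necessarily what the updated patch
uses):

#define ESP_TRAILER_LEN	2	/* pad length byte + next header byte */
#define ESP_ICV_LEN	16	/* AES-GCM-128 ICV */
#define ESP_TRAILER_ICV_LEN	(ESP_TRAILER_LEN + ESP_ICV_LEN)

/* The ESP pad-length byte sits ESP_TRAILER_ICV_LEN bytes before the end
 * of the packet; the hardware wants the total trailer length, i.e.
 * padding + trailer + ICV. Single-segment packets only. */
static inline uint8_t
esp_trailer_len_get(const struct rte_mbuf *m)
{
	return *rte_pktmbuf_mtod_offset(m, const uint8_t *,
			rte_pktmbuf_pkt_len(m) - ESP_TRAILER_ICV_LEN) +
			ESP_TRAILER_ICV_LEN;
}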
>> + }
>> + return 0;
>> +}
>> +
>> +struct rte_cryptodev_capabilities aes_gmac_crypto_capabilities[] = {
>> + { /* AES GMAC (128-bit) */
>> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
>> + {.sym = {
>> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
>> + {.auth = {
>> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
>> + .block_size = 16,
>> + .key_size = {
>> + .min = 16,
>> + .max = 16,
>> + .increment = 0
>> + },
>> + .digest_size = {
>> + .min = 12,
>> + .max = 12,
>> + .increment = 0
>> + },
>> + .iv_size = {
>> + .min = 12,
>> + .max = 12,
>> + .increment = 0
>> + }
>> + }, }
>> + }, }
>> + },
>> + {
>> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
>> + {.sym = {
>> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
>> + }, }
>> + },
>> +};
>> +
>> +struct rte_cryptodev_capabilities aes_gcm_gmac_crypto_capabilities[] = {
>> + { /* AES GMAC (128-bit) */
>> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
>> + {.sym = {
>> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
>> + {.auth = {
>> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
>> + .block_size = 16,
>> + .key_size = {
>> + .min = 16,
>> + .max = 16,
>> + .increment = 0
>> + },
>> + .digest_size = {
>> + .min = 12,
>> + .max = 12,
>> + .increment = 0
>> + },
>> + .iv_size = {
>> + .min = 12,
>> + .max = 12,
>> + .increment = 0
>> + }
>> + }, }
>> + }, }
>> + },
>> + { /* AES GCM (128-bit) */
>> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
>> + {.sym = {
>> + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
>> + {.aead = {
>> + .algo = RTE_CRYPTO_AEAD_AES_GCM,
>> + .block_size = 16,
>> + .key_size = {
>> + .min = 16,
>> + .max = 16,
>> + .increment = 0
>> + },
>> + .digest_size = {
>> + .min = 8,
>> + .max = 16,
>> + .increment = 4
>> + },
>> + .aad_size = {
>> + .min = 0,
>> + .max = 65535,
>> + .increment = 1
>> + },
>> + .iv_size = {
>> + .min = 12,
>> + .max = 12,
>> + .increment = 0
>> + }
>> + }, }
>> + }, }
>> + },
>> + {
>> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
>> + {.sym = {
>> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
>> + }, }
>> + },
>> +};
>> +
>> +static const struct rte_security_capability ixgbe_security_capabilities[] = {
>> + { /* IPsec Inline Crypto ESP Transport Egress */
>> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
>> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
>> + .ipsec = {
>> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
>> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
>> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
>> + .options = { 0 }
>> + },
>> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
>> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
>> + },
>> + { /* IPsec Inline Crypto ESP Transport Ingress */
>> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
>> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
>> + .ipsec = {
>> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
>> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
>> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
>> + .options = { 0 }
>> + },
>> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
>> + .ol_flags = 0
>> + },
>> + { /* IPsec Inline Crypto ESP Tunnel Egress */
>> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
>> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
>> + .ipsec = {
>> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
>> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
>> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
>> + .options = { 0 }
>> + },
>> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
>> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
>> + },
>> + { /* IPsec Inline Crypto ESP Tunnel Ingress */
>> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
>> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
>> + .ipsec = {
>> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
>> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
>> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
>> + .options = { 0 }
>> + },
>> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
>> + .ol_flags = 0
>> + },
>> + {
>> + .action = RTE_SECURITY_ACTION_TYPE_NONE
>> + }
>> +};
>> +
>> +static const struct rte_security_capability *
>> +ixgbe_crypto_capabilities_get(void *device __rte_unused)
>> +{
> As a nit: if ixgbe_security_capabilities are not used in any other place -
> you can move its definition inside that function.
Done.
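I.e. the table now lives inside the getter, roughly (entries elided here,
they stay exactly as in the patch):

static const struct rte_security_capability *
ixgbe_crypto_capabilities_get(void *device __rte_unused)
{
	static const struct rte_security_capability
			ixgbe_security_capabilities[] = {
		/* ... the IPsec inline-crypto entries from above ... */
		{
			.action = RTE_SECURITY_ACTION_TYPE_NONE
		}
	};

	return ixgbe_security_capabilities;
}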
>
>> +};
>> +
>> +struct ixgbe_crypto_tx_desc_md {
>> + union {
>> + uint64_t data;
>> + struct {
>> + uint32_t sa_idx;
>> + uint8_t pad_len;
>> + uint8_t enc;
>> + };
>> + };
>> +};
>
> Why just not:
> union ixgbe_crypto_tx_desc_md {
> uint64_t data;
> struct {...};
> };
> ?
Done.
>
>> +
>> +struct ixgbe_ipsec {
>> + struct ixgbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
>> + struct ixgbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
>> + struct ixgbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
>> +};
>> +
>> +extern struct rte_security_ops ixgbe_security_ops;
>> +
>> +
>> +int ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
>> +int ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
>> + const void *ip_spec,
>> + uint8_t is_ipv6);
>> +
>> +
>> +
>> +#endif /*IXGBE_IPSEC_H_*/
>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
>> index 0038dfb..279e3fa 100644
>> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
>> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
>> @@ -93,6 +93,7 @@
>> PKT_TX_TCP_SEG | \
>> PKT_TX_MACSEC | \
>> PKT_TX_OUTER_IP_CKSUM | \
>> + PKT_TX_SEC_OFFLOAD | \
>> IXGBE_TX_IEEE1588_TMST)
>>
>> #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
>> @@ -395,7 +396,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>> static inline void
>> ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
>> volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
>> - uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
>> + uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
>> + struct rte_mbuf *mb)
> I don't think you need to pass mb as a parameter to that function:
> you already have ol_flags as a parameter and all you need is just struct ixgbe_crypto_tx_desc_md md
> here as an extra parameter.
Done.
>
>> {
>> uint32_t type_tucmd_mlhl;
>> uint32_t mss_l4len_idx = 0;
>> @@ -479,6 +481,18 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
>> seqnum_seed |= tx_offload.l2_len
>> << IXGBE_ADVTXD_TUNNEL_LEN;
>> }
>> + if (mb->ol_flags & PKT_TX_SEC_OFFLOAD) {
>> + struct ixgbe_crypto_tx_desc_md *mdata =
>> + (struct ixgbe_crypto_tx_desc_md *)
>> + &mb->udata64;
>> + seqnum_seed |=
>> + (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata->sa_idx);
>> + type_tucmd_mlhl |= mdata->enc ?
>> + (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
>> + IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
>> + type_tucmd_mlhl |=
>> + (mdata->pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
> Shouldn't we also update tx_offload_mask here?
We do - updated.
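I.e., in ixgbe_set_xmit_ctx(), something along these lines (sketch only):

	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
		/* Include the IPsec fields in the comparison mask so that
		 * what_advctx_update() takes them into account when it
		 * decides whether a cached context can be reused. */
		tx_offload_mask.sa_idx |= ~0;
		tx_offload_mask.sec_pad_len |= ~0;
	}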
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 10:51 ` Radu Nicolau
@ 2017-10-19 11:04 ` Ananyev, Konstantin
2017-10-19 11:57 ` Nicolau, Radu
0 siblings, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 11:04 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> >
> >> <snip>
> >> +
> >> +static int
> >> +ixgbe_crypto_update_mb(void *device __rte_unused,
> >> + struct rte_security_session *session,
> >> + struct rte_mbuf *m, void *params __rte_unused)
> >> +{
> >> + struct ixgbe_crypto_session *ic_session =
> >> + get_sec_session_private_data(session);
> >> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> >> + struct ixgbe_crypto_tx_desc_md *mdata =
> >> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> >> + mdata->enc = 1;
> >> + mdata->sa_idx = ic_session->sa_index;
> >> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> >> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> > Could you explain what pad_len supposed to contain?
> > Also what is a magical constant '18'?
> > Could you create some macro if needed?
> I added an explanation in the code, we read the payload padding size
> that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
> and 16 for ICV.
Ok, can we at least have macros for all these constants?
Another question: you do use pkt_len() here - does it mean that multi-segment
packets are not supported by ixgbe-ipsec?
Konstantin
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 11:04 ` Ananyev, Konstantin
@ 2017-10-19 11:57 ` Nicolau, Radu
2017-10-19 12:16 ` Ananyev, Konstantin
0 siblings, 1 reply; 195+ messages in thread
From: Nicolau, Radu @ 2017-10-19 11:57 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, October 19, 2017 12:04 PM
> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> olivier.matz@6wind.com
> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>
>
>
> > >
> > >> <snip>
> > >> +
> > >> +static int
> > >> +ixgbe_crypto_update_mb(void *device __rte_unused,
> > >> + struct rte_security_session *session,
> > >> + struct rte_mbuf *m, void *params __rte_unused) {
> > >> + struct ixgbe_crypto_session *ic_session =
> > >> + get_sec_session_private_data(session);
> > >> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> > >> + struct ixgbe_crypto_tx_desc_md *mdata =
> > >> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> > >> + mdata->enc = 1;
> > >> + mdata->sa_idx = ic_session->sa_index;
> > >> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> > >> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> > > Could you explain what pad_len supposed to contain?
> > > Also what is a magical constant '18'?
> > > Could you create some macro if needed?
> > I added an explanation in the code, we read the payload padding size
> > that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
> > and 16 for ICV.
>
> Ok, can we at least have a macros for all these constants?
> Another question: you do use pkt_len() here - does it mean that multi-
> segment packets are not supported by ixgbe-ipsec?
> Konstantin
It does support multi-segment, but pad_len has to be set only for a single-segment send; it will be ignored otherwise. I have updated the code to set it for single-segment packets only.
Also, our test app does not support multi-segment packets.
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 11:57 ` Nicolau, Radu
@ 2017-10-19 12:16 ` Ananyev, Konstantin
2017-10-19 12:29 ` Ananyev, Konstantin
2017-10-19 13:09 ` Radu Nicolau
0 siblings, 2 replies; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 12:16 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> -----Original Message-----
> From: Nicolau, Radu
> Sent: Thursday, October 19, 2017 12:57 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, October 19, 2017 12:04 PM
> > To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> > <akhil.goyal@nxp.com>; dev@dpdk.org
> > Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> > borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> > sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
> > John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> > olivier.matz@6wind.com
> > Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> >
> >
> >
> > > >
> > > >> <snip>
> > > >> +
> > > >> +static int
> > > >> +ixgbe_crypto_update_mb(void *device __rte_unused,
> > > >> + struct rte_security_session *session,
> > > >> + struct rte_mbuf *m, void *params __rte_unused) {
> > > >> + struct ixgbe_crypto_session *ic_session =
> > > >> + get_sec_session_private_data(session);
> > > >> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> > > >> + struct ixgbe_crypto_tx_desc_md *mdata =
> > > >> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> > > >> + mdata->enc = 1;
> > > >> + mdata->sa_idx = ic_session->sa_index;
> > > >> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> > > >> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> > > > Could you explain what pad_len supposed to contain?
> > > > Also what is a magical constant '18'?
> > > > Could you create some macro if needed?
> > > I added an explanation in the code, we read the payload padding size
> > > that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
> > > and 16 for ICV.
> >
> > Ok, can we at least have a macros for all these constants?
> > Another question: you do use pkt_len() here - does it mean that multi-
> > segment packets are not supported by ixgbe-ipsec?
> > Konstantin
> It does support multisegment, but the pad_len has to be set only for single send, it will be ignored otherwise. I have updated the code to set
> it for single segment packets only.
Sorry, I didn't understand that.
If that function does support multi-seg packets, then it has to go to the last segment via m->next.
If it doesn't, then it should return an error in case of m->nb_segs != 1.
Right?
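E.g. either bail out early when m->nb_segs != 1, or read the trailer from
the last segment, roughly like this (helper name is only illustrative, and
it assumes the last segment holds at least the 18 trailer + ICV bytes):

static inline uint8_t
esp_trailer_len_from_lastseg(struct rte_mbuf *m)
{
	/* The ESP pad-length byte sits 18 bytes (2-byte trailer plus
	 * 16-byte ICV) before the end of the packet, i.e. in the last
	 * segment of a chained mbuf. */
	struct rte_mbuf *last = rte_pktmbuf_lastseg(m);

	return *rte_pktmbuf_mtod_offset(last, uint8_t *,
			last->data_len - 18) + 18;
}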
> Also, our test app does not support multisegment packets.
Ok, I suppose that means the multi-seg case wasn't tested :)
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 12:16 ` Ananyev, Konstantin
@ 2017-10-19 12:29 ` Ananyev, Konstantin
2017-10-19 13:14 ` Radu Nicolau
2017-10-19 13:09 ` Radu Nicolau
1 sibling, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 12:29 UTC (permalink / raw)
To: Ananyev, Konstantin, Nicolau, Radu, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Thursday, October 19, 2017 1:17 PM
> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>
>
>
> > -----Original Message-----
> > From: Nicolau, Radu
> > Sent: Thursday, October 19, 2017 12:57 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> > Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> > borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> > Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> > Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, October 19, 2017 12:04 PM
> > > To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> > > <akhil.goyal@nxp.com>; dev@dpdk.org
> > > Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> > > <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> > > borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> > > sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
> > > John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> > > olivier.matz@6wind.com
> > > Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> > >
> > >
> > >
> > > > >
> > > > >> <snip>
> > > > >> +
> > > > >> +static int
> > > > >> +ixgbe_crypto_update_mb(void *device __rte_unused,
> > > > >> + struct rte_security_session *session,
> > > > >> + struct rte_mbuf *m, void *params __rte_unused) {
Another sort of generic question - why not make the security_set_pkt_metadata function
accept a bulk of packets?
In that case one can minimize the cost of function calls, accessing session data, etc.
Though I suppose that could wait till the next patch series.
Konstantin
> > > > >> + struct ixgbe_crypto_session *ic_session =
> > > > >> + get_sec_session_private_data(session);
> > > > >> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> > > > >> + struct ixgbe_crypto_tx_desc_md *mdata =
> > > > >> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> > > > >> + mdata->enc = 1;
> > > > >> + mdata->sa_idx = ic_session->sa_index;
> > > > >> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> > > > >> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> > > > > Could you explain what pad_len supposed to contain?
> > > > > Also what is a magical constant '18'?
> > > > > Could you create some macro if needed?
> > > > I added an explanation in the code, we read the payload padding size
> > > > that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
> > > > and 16 for ICV.
> > >
> > > Ok, can we at least have a macros for all these constants?
> > > Another question: you do use pkt_len() here - does it mean that multi-
> > > segment packets are not supported by ixgbe-ipsec?
> > > Konstantin
> > It does support multisegment, but the pad_len has to be set only for single send, it will be ignored otherwise. I have updated the code to
> set
> > it for single segment packets only.
>
> Sorry, I didn't understand that.
> If that function does support multiseg packets, then it has to go to the last segment via m->next,
> If it doesn't, then it should return an error I case of m->nb_seg != 1.
> Right?
>
> > Also, our test app does not support multisegment packets.
>
> Ok, I suppose that means, multi-seg case wasn't tested :)
>
>
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 12:29 ` Ananyev, Konstantin
@ 2017-10-19 13:14 ` Radu Nicolau
2017-10-19 13:22 ` Ananyev, Konstantin
0 siblings, 1 reply; 195+ messages in thread
From: Radu Nicolau @ 2017-10-19 13:14 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
On 10/19/2017 1:29 PM, Ananyev, Konstantin wrote:
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
>> Sent: Thursday, October 19, 2017 1:17 PM
>> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
>> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
>> Subject: Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>>
>>
>>
>>> -----Original Message-----
>>> From: Nicolau, Radu
>>> Sent: Thursday, October 19, 2017 12:57 PM
>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
>>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
>>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
>>> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
>>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ananyev, Konstantin
>>>> Sent: Thursday, October 19, 2017 12:04 PM
>>>> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
>>>> <akhil.goyal@nxp.com>; dev@dpdk.org
>>>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
>>>> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
>>>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
>>>> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
>>>> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
>>>> olivier.matz@6wind.com
>>>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>>>>
>>>>
>>>>
>>>>>>> <snip>
>>>>>>> +
>>>>>>> +static int
>>>>>>> +ixgbe_crypto_update_mb(void *device __rte_unused,
>>>>>>> + struct rte_security_session *session,
>>>>>>> + struct rte_mbuf *m, void *params __rte_unused) {
>
>
> Another sort of generic question - why not make security_set_pkt_metadata function
> to accept bulk of packets?
> In that case o can minimize the cost of function calls, accessing session data, etc.
> Though I suppose that could wait till next patch series.
> Konstantin
It is a good suggestion, but we need to discuss it further; for example,
if it can accept a bulk of packets, will it also need a bulk of metadata
pointers, or just one for all the packets?
>
>>>>>>> + struct ixgbe_crypto_session *ic_session =
>>>>>>> + get_sec_session_private_data(session);
>>>>>>> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
>>>>>>> + struct ixgbe_crypto_tx_desc_md *mdata =
>>>>>>> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
>>>>>>> + mdata->enc = 1;
>>>>>>> + mdata->sa_idx = ic_session->sa_index;
>>>>>>> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
>>>>>>> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
>>>>>> Could you explain what pad_len supposed to contain?
>>>>>> Also what is a magical constant '18'?
>>>>>> Could you create some macro if needed?
>>>>> I added an explanation in the code, we read the payload padding size
>>>>> that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
>>>>> and 16 for ICV.
>>>> Ok, can we at least have a macros for all these constants?
>>>> Another question: you do use pkt_len() here - does it mean that multi-
>>>> segment packets are not supported by ixgbe-ipsec?
>>>> Konstantin
>>> It does support multisegment, but the pad_len has to be set only for single send, it will be ignored otherwise. I have updated the code to
>> set
>>> it for single segment packets only.
>> Sorry, I didn't understand that.
>> If that function does support multiseg packets, then it has to go to the last segment via m->next,
>> If it doesn't, then it should return an error I case of m->nb_seg != 1.
>> Right?
>>
>>> Also, our test app does not support multisegment packets.
>> Ok, I suppose that means, multi-seg case wasn't tested :)
>>
>>
>>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 13:14 ` Radu Nicolau
@ 2017-10-19 13:22 ` Ananyev, Konstantin
2017-10-19 14:19 ` Nicolau, Radu
0 siblings, 1 reply; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 13:22 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> -----Original Message-----
> From: Nicolau, Radu
> Sent: Thursday, October 19, 2017 2:14 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> Subject: Re: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>
>
>
> On 10/19/2017 1:29 PM, Ananyev, Konstantin wrote:
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> >> Sent: Thursday, October 19, 2017 1:17 PM
> >> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> >> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>;
> hemant.agrawal@nxp.com;
> >> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> >> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> >> Subject: Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> >>
> >>
> >>
> >>> -----Original Message-----
> >>> From: Nicolau, Radu
> >>> Sent: Thursday, October 19, 2017 12:57 PM
> >>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> >>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>;
> hemant.agrawal@nxp.com;
> >>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> >>> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
> >>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> >>>
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Ananyev, Konstantin
> >>>> Sent: Thursday, October 19, 2017 12:04 PM
> >>>> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> >>>> <akhil.goyal@nxp.com>; dev@dpdk.org
> >>>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> >>>> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> >>>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> >>>> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
> >>>> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> >>>> olivier.matz@6wind.com
> >>>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> >>>>
> >>>>
> >>>>
> >>>>>>> <snip>
> >>>>>>> +
> >>>>>>> +static int
> >>>>>>> +ixgbe_crypto_update_mb(void *device __rte_unused,
> >>>>>>> + struct rte_security_session *session,
> >>>>>>> + struct rte_mbuf *m, void *params __rte_unused) {
> >
> >
> > Another sort of generic question - why not make security_set_pkt_metadata function
> > to accept bulk of packets?
> > In that case o can minimize the cost of function calls, accessing session data, etc.
> > Though I suppose that could wait till next patch series.
> > Konstantin
> It is a good suggestion, but we need to discuss it further;
Yes, as I said, that's for the future.
> for example
> if it can accept a bulk of packets, will it need also a bulk of metadata
> pointers, or just one for all the packets?
By metadata do you mean a session or ...?
Konstantin
> >
> >>>>>>> + struct ixgbe_crypto_session *ic_session =
> >>>>>>> + get_sec_session_private_data(session);
> >>>>>>> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> >>>>>>> + struct ixgbe_crypto_tx_desc_md *mdata =
> >>>>>>> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
> >>>>>>> + mdata->enc = 1;
> >>>>>>> + mdata->sa_idx = ic_session->sa_index;
> >>>>>>> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> >>>>>>> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
> >>>>>> Could you explain what pad_len supposed to contain?
> >>>>>> Also what is a magical constant '18'?
> >>>>>> Could you create some macro if needed?
> >>>>> I added an explanation in the code, we read the payload padding size
> >>>>> that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
> >>>>> and 16 for ICV.
> >>>> Ok, can we at least have a macros for all these constants?
> >>>> Another question: you do use pkt_len() here - does it mean that multi-
> >>>> segment packets are not supported by ixgbe-ipsec?
> >>>> Konstantin
> >>> It does support multisegment, but the pad_len has to be set only for single send, it will be ignored otherwise. I have updated the code
> to
> >> set
> >>> it for single segment packets only.
> >> Sorry, I didn't understand that.
> >> If that function does support multiseg packets, then it has to go to the last segment via m->next,
> >> If it doesn't, then it should return an error I case of m->nb_seg != 1.
> >> Right?
> >>
> >>> Also, our test app does not support multisegment packets.
> >> Ok, I suppose that means, multi-seg case wasn't tested :)
> >>
> >>
> >>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 13:22 ` Ananyev, Konstantin
@ 2017-10-19 14:19 ` Nicolau, Radu
2017-10-19 14:36 ` Ananyev, Konstantin
0 siblings, 1 reply; 195+ messages in thread
From: Nicolau, Radu @ 2017-10-19 14:19 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, October 19, 2017 2:23 PM
> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> <akhil.goyal@nxp.com>; dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> olivier.matz@6wind.com
> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>
>
>
> > -----Original Message-----
> > From: Nicolau, Radu
> > Sent: Thursday, October 19, 2017 2:14 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal
> > <akhil.goyal@nxp.com>; dev@dpdk.org
> > Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> > borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
> > sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
> John
> > <john.mcnamara@intel.com>; shahafs@mellanox.com;
> > olivier.matz@6wind.com
> > Subject: Re: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> >
> >
> >
> > On 10/19/2017 1:29 PM, Ananyev, Konstantin wrote:
> > >
> > >> -----Original Message-----
> > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> > >> Konstantin
> > >> Sent: Thursday, October 19, 2017 1:17 PM
> > >> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> > >> <akhil.goyal@nxp.com>; dev@dpdk.org
> > >> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch,
> > >> Pablo <pablo.de.lara.guarch@intel.com>;
> > hemant.agrawal@nxp.com;
> > >> borisp@mellanox.com; aviadye@mellanox.com;
> thomas@monjalon.net;
> > >> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> Mcnamara,
> > >> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> > >> olivier.matz@6wind.com
> > >> Subject: Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline
> > >> ipsec
> > >>
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Nicolau, Radu
> > >>> Sent: Thursday, October 19, 2017 12:57 PM
> > >>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil
> > >>> Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
> > >>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch,
> > >>> Pablo <pablo.de.lara.guarch@intel.com>;
> > hemant.agrawal@nxp.com;
> > >>> borisp@mellanox.com; aviadye@mellanox.com;
> thomas@monjalon.net;
> > >>> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> Mcnamara,
> > >>> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> > >>> olivier.matz@6wind.com
> > >>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> > >>>
> > >>>
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Ananyev, Konstantin
> > >>>> Sent: Thursday, October 19, 2017 12:04 PM
> > >>>> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
> > >>>> <akhil.goyal@nxp.com>; dev@dpdk.org
> > >>>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch,
> > >>>> Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
> > >>>> borisp@mellanox.com; aviadye@mellanox.com;
> thomas@monjalon.net;
> > >>>> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
> Mcnamara,
> > >>>> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
> > >>>> olivier.matz@6wind.com
> > >>>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
> > >>>>
> > >>>>
> > >>>>
> > >>>>>>> <snip>
> > >>>>>>> +
> > >>>>>>> +static int
> > >>>>>>> +ixgbe_crypto_update_mb(void *device __rte_unused,
> > >>>>>>> + struct rte_security_session *session,
> > >>>>>>> + struct rte_mbuf *m, void *params __rte_unused)
> {
> > >
> > >
> > > Another sort of generic question - why not make
> > > the security_set_pkt_metadata function accept a bulk of packets?
> > > In that case one can minimize the cost of function calls, accessing session
> > > data, etc.
> > > Though I suppose that could wait till next patch series.
> > > Konstantin
> > It is a good suggestion, but we need to discuss it further;
>
> Yes, as I said that's for future.
>
> > for example
> > if it can accept a bulk of packets, will it also need a bulk of
> > metadata pointers, or just one for all the packets?
>
> By metadata do you mean a session or ...?
> Konstantin
No, I mean the void *params parameter (which was named metadata in earlier patches).
>
> > >
> > >>>>>>> + struct ixgbe_crypto_session *ic_session =
> > >>>>>>> + get_sec_session_private_data(session);
> > >>>>>>> + if (ic_session->op ==
> IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
> > >>>>>>> + struct ixgbe_crypto_tx_desc_md *mdata =
> > >>>>>>> + (struct ixgbe_crypto_tx_desc_md *)&m-
> >udata64;
> > >>>>>>> + mdata->enc = 1;
> > >>>>>>> + mdata->sa_idx = ic_session->sa_index;
> > >>>>>>> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
> > >>>>>>> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) +
> 18;
> > >>>>>> Could you explain what pad_len supposed to contain?
> > >>>>>> Also what is a magical constant '18'?
> > >>>>>> Could you create some macro if needed?
> > >>>>> I added an explanation in the code, we read the payload padding
> > >>>>> size that is stored at the len-18 bytes and add 18 bytes, 2 for
> > >>>>> ESP trailer and 16 for ICV.
> > >>>> Ok, can we at least have a macros for all these constants?
> > >>>> Another question: you do use pkt_len() here - does it mean that
> > >>>> multi- segment packets are not supported by ixgbe-ipsec?
> > >>>> Konstantin
> > >>> It does support multisegment, but the pad_len has to be set only
> > >>> for single send, it will be ignored otherwise. I have updated the
> > >>> code to set it for single segment packets only.
> > >> Sorry, I didn't understand that.
> > >> If that function does support multiseg packets, then it has to go
> > >> to the last segment via m->next. If it doesn't, then it should return an
> > >> error in case of m->nb_seg != 1.
> > >> Right?
> > >>
> > >>> Also, our test app does not support multisegment packets.
> > >> Ok, I suppose that means, multi-seg case wasn't tested :)
> > >>
> > >>
> > >>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 14:19 ` Nicolau, Radu
@ 2017-10-19 14:36 ` Ananyev, Konstantin
0 siblings, 0 replies; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 14:36 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
> > > >>>>
> > > >>>>>>> <snip>
> > > >>>>>>> +
> > > >>>>>>> +static int
> > > >>>>>>> +ixgbe_crypto_update_mb(void *device __rte_unused,
> > > >>>>>>> + struct rte_security_session *session,
> > > >>>>>>> + struct rte_mbuf *m, void *params __rte_unused)
> > {
> > > >
> > > >
> > > > Another sort of generic question - why not make
> > > > the security_set_pkt_metadata function accept a bulk of packets?
> > > > In that case one can minimize the cost of function calls, accessing session
> > > > data, etc.
> > > > Though I suppose that could wait till next patch series.
> > > > Konstantin
> > > It is a good suggestion, but we need to discuss it further;
> >
> > Yes, as I said that's for future.
> >
> > > for example
> > > if it can accept a bulk of packets, will it also need a bulk of
> > > metadata pointers, or just one for all the packets?
> >
> > By metadata do you mean a session or ...?
> > Konstantin
>
> No, I mean the void *params parameter (which was named metadata in earlier patches).
> >
As right now it is not used, and I don't really know how you guys foresee using it in the future -
I don't have any strong opinion on it :)
Konstantin
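For reference, a bulk variant of the kind floated in this exchange might look roughly like the sketch below. It is only an illustration of a possible future API: the function name and the per-packet params array are assumptions, and a real version would push the loop into the driver so the session data is looked up once per burst rather than per packet.

#include <rte_mbuf.h>
#include <rte_security.h>

/*
 * Hypothetical bulk variant, for illustration only: one session for the
 * whole burst, with an optional per-packet params array (NULL if unused).
 */
static inline int
rte_security_set_pkt_metadata_bulk(struct rte_security_ctx *instance,
		struct rte_security_session *sess,
		struct rte_mbuf *m[], void *params[], uint16_t nb_pkts)
{
	uint16_t i;
	int ret;

	for (i = 0; i < nb_pkts; i++) {
		ret = rte_security_set_pkt_metadata(instance, sess, m[i],
				params != NULL ? params[i] : NULL);
		if (ret != 0)
			return ret;	/* stop on the first failure */
	}
	return 0;
}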
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-19 12:16 ` Ananyev, Konstantin
2017-10-19 12:29 ` Ananyev, Konstantin
@ 2017-10-19 13:09 ` Radu Nicolau
1 sibling, 0 replies; 195+ messages in thread
From: Radu Nicolau @ 2017-10-19 13:09 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, Mcnamara, John,
shahafs, olivier.matz
On 10/19/2017 1:16 PM, Ananyev, Konstantin wrote:
>
>> -----Original Message-----
>> From: Nicolau, Radu
>> Sent: Thursday, October 19, 2017 12:57 PM
>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org
>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com;
>> Mcnamara, John <john.mcnamara@intel.com>; shahafs@mellanox.com; olivier.matz@6wind.com
>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>>
>>
>>
>>> -----Original Message-----
>>> From: Ananyev, Konstantin
>>> Sent: Thursday, October 19, 2017 12:04 PM
>>> To: Nicolau, Radu <radu.nicolau@intel.com>; Akhil Goyal
>>> <akhil.goyal@nxp.com>; dev@dpdk.org
>>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
>>> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com;
>>> borisp@mellanox.com; aviadye@mellanox.com; thomas@monjalon.net;
>>> sandeep.malik@nxp.com; jerin.jacob@caviumnetworks.com; Mcnamara,
>>> John <john.mcnamara@intel.com>; shahafs@mellanox.com;
>>> olivier.matz@6wind.com
>>> Subject: RE: [PATCH v4 10/12] net/ixgbe: enable inline ipsec
>>>
>>>
>>>
>>>>>> <snip>
>>>>>> +
>>>>>> +static int
>>>>>> +ixgbe_crypto_update_mb(void *device __rte_unused,
>>>>>> + struct rte_security_session *session,
>>>>>> + struct rte_mbuf *m, void *params __rte_unused) {
>>>>>> + struct ixgbe_crypto_session *ic_session =
>>>>>> + get_sec_session_private_data(session);
>>>>>> + if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
>>>>>> + struct ixgbe_crypto_tx_desc_md *mdata =
>>>>>> + (struct ixgbe_crypto_tx_desc_md *)&m->udata64;
>>>>>> + mdata->enc = 1;
>>>>>> + mdata->sa_idx = ic_session->sa_index;
>>>>>> + mdata->pad_len = *rte_pktmbuf_mtod_offset(m,
>>>>>> + uint8_t *, rte_pktmbuf_pkt_len(m) - 18) + 18;
>>>>> Could you explain what pad_len supposed to contain?
>>>>> Also what is a magical constant '18'?
>>>>> Could you create some macro if needed?
>>>> I added an explanation in the code, we read the payload padding size
>>>> that is stored at the len-18 bytes and add 18 bytes, 2 for ESP trailer
>>>> and 16 for ICV.
>>> Ok, can we at least have a macros for all these constants?
>>> Another question: you do use pkt_len() here - does it mean that multi-
>>> segment packets are not supported by ixgbe-ipsec?
>>> Konstantin
>> It does support multisegment, but the pad_len has to be set only for single send, it will be ignored otherwise. I have updated the code to set
>> it for single segment packets only.
> Sorry, I didn't understand that.
> If that function does support multiseg packets, then it has to go to the last segment via m->next.
> If it doesn't, then it should return an error in case of m->nb_seg != 1.
> Right?
No need to return an error, just don't try to read the padding and don't
set the pad_len in the metadata. My understanding of the datasheet is
that multisegment egress IPsec is supported only for TCP/UDP packets,
and the pad_len is ignored even if it's set. So I changed it to only
process the padding for m->nb_seg == 1.
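A minimal sketch of the guard being described, assuming a hypothetical helper called from ixgbe_crypto_update_mb(); struct ixgbe_crypto_tx_desc_md and the '18' constant are taken from the patch under review, and the mbuf segment count field is nb_segs.

#include <rte_mbuf.h>

/*
 * Sketch only: read the ESP trailer padding for single-segment mbufs;
 * for multi-segment packets the hardware ignores pad_len, so leave it
 * at zero. ixgbe_crypto_tx_desc_md comes from the ixgbe ipsec patch.
 */
static inline void
ixgbe_crypto_set_pad_len(struct ixgbe_crypto_tx_desc_md *mdata,
		const struct rte_mbuf *m)
{
	if (m->nb_segs == 1)
		mdata->pad_len = *rte_pktmbuf_mtod_offset(m, const uint8_t *,
				rte_pktmbuf_pkt_len(m) - 18) + 18;
	else
		mdata->pad_len = 0;
}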
>
>> Also, our test app does not support multisegment packets.
> Ok, I suppose that means, multi-seg case wasn't tested :)
>
>
>
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec Akhil Goyal
2017-10-15 12:51 ` Aviad Yehezkel
2017-10-18 21:29 ` Ananyev, Konstantin
@ 2017-10-19 9:04 ` Ananyev, Konstantin
2 siblings, 0 replies; 195+ messages in thread
From: Ananyev, Konstantin @ 2017-10-19 9:04 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, De Lara Guarch, Pablo, hemant.agrawal, Nicolau,
Radu, borisp, aviadye, thomas, sandeep.malik, jerin.jacob,
Mcnamara, John, shahafs, olivier.matz
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 14b9c53..fcabd5e 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -61,6 +61,7 @@
> #include <rte_random.h>
> #include <rte_dev.h>
> #include <rte_hash_crc.h>
> +#include <rte_security_driver.h>
>
> #include "ixgbe_logs.h"
> #include "base/ixgbe_api.h"
> @@ -1132,6 +1133,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> IXGBE_DEV_PRIVATE_TO_FILTER_INFO(eth_dev->data->dev_private);
> struct ixgbe_bw_conf *bw_conf =
> IXGBE_DEV_PRIVATE_TO_BW_CONF(eth_dev->data->dev_private);
> + struct rte_security_ctx *security_instance;
> uint32_t ctrl_ext;
> uint16_t csum;
> int diag, i;
> @@ -1139,6 +1141,17 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
> PMD_INIT_FUNC_TRACE();
>
> eth_dev->dev_ops = &ixgbe_eth_dev_ops;
> + security_instance = rte_malloc("rte_security_instances_ops",
> + sizeof(struct rte_security_ctx), 0);
> + if (security_instance == NULL)
> + return -ENOMEM;
> + security_instance->state = RTE_SECURITY_INSTANCE_VALID;
> + security_instance->device = (void *)eth_dev;
> + security_instance->ops = &ixgbe_security_ops;
> + security_instance->sess_cnt = 0;
> +
As another nit - can we move the code above into a separate function
in ixgbe_ipsec.c?
Something like ixgbe_ipsec_ctx_create() or so?
Konstantin
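Such a helper might look roughly like the sketch below. It simply wraps the allocation code quoted above; ixgbe_security_ops and the security_ctx/state fields come from the patch series itself, so treat this as an illustration rather than the actual follow-up change.

#include <errno.h>

#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <rte_security_driver.h>

/* Sketch of the suggested helper living in ixgbe_ipsec.c: allocate and
 * register the security context so eth_ixgbe_dev_init() only calls it. */
int
ixgbe_ipsec_ctx_create(struct rte_eth_dev *eth_dev)
{
	struct rte_security_ctx *ctx;

	ctx = rte_malloc("rte_security_instances_ops",
			sizeof(struct rte_security_ctx), 0);
	if (ctx == NULL)
		return -ENOMEM;

	ctx->state = RTE_SECURITY_INSTANCE_VALID;
	ctx->device = (void *)eth_dev;
	ctx->ops = &ixgbe_security_ops;
	ctx->sess_cnt = 0;
	eth_dev->data->security_ctx = ctx;

	return 0;
}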
> + eth_dev->data->security_ctx = security_instance;
> +
> eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
> eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> @@ -1169,6 +1182,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>
> rte_eth_copy_pci_info(eth_dev, pci_dev);
> eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
> + eth_dev->data->dev_flags |= RTE_ETH_DEV_SECURITY;
>
> /* Vendor and Device ID need to be set before init of shared code */
> hw->device_id = pci_dev->id.device_id;
> @@ -1401,6 +1415,8 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
> /* Remove all Traffic Manager configuration */
> ixgbe_tm_conf_uninit(eth_dev);
>
> + rte_free(eth_dev->data->security_ctx);
> +
> return 0;
> }
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 11/12] crypto/dpaa2_sec: add support for protocol offload ipsec
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (9 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 10/12] net/ixgbe: enable inline ipsec Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 12/12] examples/ipsec-secgw: add support for security offload Akhil Goyal
` (2 subsequent siblings)
13 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Driver implementation to support rte_security APIs
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
drivers/crypto/Makefile | 2 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 420 ++++++++++++++++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 62 ++++
4 files changed, 473 insertions(+), 12 deletions(-)
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
index c3bb3dd..8fd07d6 100644
--- a/doc/guides/cryptodevs/features/dpaa2_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -7,6 +7,7 @@
Symmetric crypto = Y
Sym operation chaining = Y
HW Accelerated = Y
+Protocol offload = Y
;
; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index d8c8740..ec297f2 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -56,7 +56,7 @@ DEPDIRS-mrvl = $(core-libs)
DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
DEPDIRS-null = $(core-libs)
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
-DEPDIRS-dpaa2_sec = $(core-libs)
+DEPDIRS-dpaa2_sec = $(core-libs) librte_security
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_SEC) += dpaa_sec
DEPDIRS-dpaa_sec = $(core-libs)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 672cacf..c768313 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -36,6 +36,7 @@
#include <rte_mbuf.h>
#include <rte_cryptodev.h>
+#include <rte_security_driver.h>
#include <rte_malloc.h>
#include <rte_memcpy.h>
#include <rte_string_fns.h>
@@ -73,12 +74,44 @@
#define FLE_POOL_NUM_BUFS 32000
#define FLE_POOL_BUF_SIZE 256
#define FLE_POOL_CACHE_SIZE 512
+#define SEC_FLC_DHR_OUTBOUND -114
+#define SEC_FLC_DHR_INBOUND 0
enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
static uint8_t cryptodev_driver_id;
static inline int
+build_proto_fd(dpaa2_sec_session *sess,
+ struct rte_crypto_op *op,
+ struct qbman_fd *fd, uint16_t bpid)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct ctxt_priv *priv = sess->ctxt;
+ struct sec_flow_context *flc;
+ struct rte_mbuf *mbuf = sym_op->m_src;
+
+ if (likely(bpid < MAX_BPID))
+ DPAA2_SET_FD_BPID(fd, bpid);
+ else
+ DPAA2_SET_FD_IVP(fd);
+
+ /* Save the shared descriptor */
+ flc = &priv->flc_desc[0].flc;
+
+ DPAA2_SET_FD_ADDR(fd, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+ DPAA2_SET_FD_OFFSET(fd, sym_op->m_src->data_off);
+ DPAA2_SET_FD_LEN(fd, sym_op->m_src->pkt_len);
+ DPAA2_SET_FD_FLC(fd, ((uint64_t)flc));
+
+ /* save physical address of mbuf */
+ op->sym->aead.digest.phys_addr = mbuf->buf_physaddr;
+ mbuf->buf_physaddr = (uint64_t)op;
+
+ return 0;
+}
+
+static inline int
build_authenc_gcm_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
@@ -545,13 +578,23 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
}
static inline int
-build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+build_sec_fd(struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
{
int ret = -1;
+ dpaa2_sec_session *sess;
PMD_INIT_FUNC_TRACE();
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+ sess = (dpaa2_sec_session *)get_session_private_data(
+ op->sym->session, cryptodev_driver_id);
+ else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+ sess = (dpaa2_sec_session *)get_sec_session_private_data(
+ op->sym->sec_session);
+ else
+ return -1;
+
switch (sess->ctxt_type) {
case DPAA2_SEC_CIPHER:
ret = build_cipher_fd(sess, op, fd, bpid);
@@ -565,6 +608,9 @@ build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
case DPAA2_SEC_CIPHER_HASH:
ret = build_authenc_fd(sess, op, fd, bpid);
break;
+ case DPAA2_SEC_IPSEC:
+ ret = build_proto_fd(sess, op, fd, bpid);
+ break;
case DPAA2_SEC_HASH_CIPHER:
default:
RTE_LOG(ERR, PMD, "error: Unsupported session\n");
@@ -588,12 +634,11 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
/*todo - need to support multiple buffer pools */
uint16_t bpid;
struct rte_mempool *mb_pool;
- dpaa2_sec_session *sess;
if (unlikely(nb_ops == 0))
return 0;
- if (ops[0]->sess_type != RTE_CRYPTO_OP_WITH_SESSION) {
+ if (ops[0]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
return 0;
}
@@ -618,13 +663,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
for (loop = 0; loop < frames_to_send; loop++) {
/*Clear the unused FD fields before sending*/
memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
- sess = (dpaa2_sec_session *)
- get_session_private_data(
- (*ops)->sym->session,
- cryptodev_driver_id);
mb_pool = (*ops)->sym->m_src->pool;
bpid = mempool_to_bpid(mb_pool);
- ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+ ret = build_sec_fd(*ops, &fd_arr[loop], bpid);
if (ret) {
PMD_DRV_LOG(ERR, "error: Improper packet"
" contents for crypto operation\n");
@@ -649,12 +690,44 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
}
static inline struct rte_crypto_op *
-sec_fd_to_mbuf(const struct qbman_fd *fd)
+sec_simple_fd_to_mbuf(const struct qbman_fd *fd, __rte_unused uint8_t id)
+{
+ struct rte_crypto_op *op;
+ uint16_t len = DPAA2_GET_FD_LEN(fd);
+ uint16_t diff = 0;
+ dpaa2_sec_session *sess_priv;
+
+ struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
+ DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+ rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
+
+ op = (struct rte_crypto_op *)mbuf->buf_physaddr;
+ mbuf->buf_physaddr = op->sym->aead.digest.phys_addr;
+ op->sym->aead.digest.phys_addr = 0L;
+
+ sess_priv = (dpaa2_sec_session *)get_sec_session_private_data(
+ op->sym->sec_session);
+ if (sess_priv->dir == DIR_ENC)
+ mbuf->data_off += SEC_FLC_DHR_OUTBOUND;
+ else
+ mbuf->data_off += SEC_FLC_DHR_INBOUND;
+ diff = len - mbuf->pkt_len;
+ mbuf->pkt_len += diff;
+ mbuf->data_len += diff;
+
+ return op;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd, uint8_t driver_id)
{
struct qbman_fle *fle;
struct rte_crypto_op *op;
struct ctxt_priv *priv;
+ if (DPAA2_FD_GET_FORMAT(fd) == qbman_fd_single)
+ return sec_simple_fd_to_mbuf(fd, driver_id);
+
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
@@ -701,6 +774,8 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
{
/* Function is responsible to receive frames for a given device and VQ*/
struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+ struct rte_cryptodev *dev =
+ (struct rte_cryptodev *)(dpaa2_qp->rx_vq.dev);
struct qbman_result *dq_storage;
uint32_t fqid = dpaa2_qp->rx_vq.fqid;
int ret, num_rx = 0;
@@ -770,7 +845,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
}
fd = qbman_result_DQ_fd(dq_storage);
- ops[num_rx] = sec_fd_to_mbuf(fd);
+ ops[num_rx] = sec_fd_to_mbuf(fd, dev->driver_id);
if (unlikely(fd->simple.frc)) {
/* TODO Parse SEC errors */
@@ -1547,6 +1622,300 @@ dpaa2_sec_set_session_parameters(struct rte_cryptodev *dev,
}
static int
+dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
+ struct rte_security_session_conf *conf,
+ void *sess)
+{
+ struct rte_security_ipsec_xform *ipsec_xform = &conf->ipsec;
+ struct rte_crypto_auth_xform *auth_xform;
+ struct rte_crypto_cipher_xform *cipher_xform;
+ dpaa2_sec_session *session = (dpaa2_sec_session *)sess;
+ struct ctxt_priv *priv;
+ struct ipsec_encap_pdb encap_pdb;
+ struct ipsec_decap_pdb decap_pdb;
+ struct alginfo authdata, cipherdata;
+ unsigned int bufsize;
+ struct sec_flow_context *flc;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ cipher_xform = &conf->crypto_xform->cipher;
+ auth_xform = &conf->crypto_xform->next->auth;
+ } else {
+ auth_xform = &conf->crypto_xform->auth;
+ cipher_xform = &conf->crypto_xform->next->cipher;
+ }
+ priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+ sizeof(struct ctxt_priv) +
+ sizeof(struct sec_flc_desc),
+ RTE_CACHE_LINE_SIZE);
+
+ if (priv == NULL) {
+ RTE_LOG(ERR, PMD, "\nNo memory for priv CTXT");
+ return -ENOMEM;
+ }
+
+ flc = &priv->flc_desc[0].flc;
+
+ session->ctxt_type = DPAA2_SEC_IPSEC;
+ session->cipher_key.data = rte_zmalloc(NULL,
+ cipher_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->cipher_key.data == NULL &&
+ cipher_xform->key.length > 0) {
+ RTE_LOG(ERR, PMD, "No Memory for cipher key\n");
+ rte_free(priv);
+ return -ENOMEM;
+ }
+
+ session->cipher_key.length = cipher_xform->key.length;
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->auth_key.data == NULL &&
+ auth_xform->key.length > 0) {
+ RTE_LOG(ERR, PMD, "No Memory for auth key\n");
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -ENOMEM;
+ }
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->cipher_key.data, cipher_xform->key.data,
+ cipher_xform->key.length);
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+
+ authdata.key = (uint64_t)session->auth_key.data;
+ authdata.keylen = session->auth_key.length;
+ authdata.key_enc_flags = 0;
+ authdata.key_type = RTA_DATA_IMM;
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA1_96;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_MD5_96;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_256_128;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_384_192;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_512_256;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ authdata.algtype = OP_PCL_IPSEC_AES_CMAC_96;
+ session->auth_alg = RTE_CRYPTO_AUTH_AES_CMAC;
+ break;
+ case RTE_CRYPTO_AUTH_NULL:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_NULL;
+ session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ case RTE_CRYPTO_AUTH_SHA1:
+ case RTE_CRYPTO_AUTH_SHA256:
+ case RTE_CRYPTO_AUTH_SHA512:
+ case RTE_CRYPTO_AUTH_SHA224:
+ case RTE_CRYPTO_AUTH_SHA384:
+ case RTE_CRYPTO_AUTH_MD5:
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+ auth_xform->algo);
+ goto out;
+ default:
+ RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+ auth_xform->algo);
+ goto out;
+ }
+ cipherdata.key = (uint64_t)session->cipher_key.data;
+ cipherdata.keylen = session->cipher_key.length;
+ cipherdata.key_enc_flags = 0;
+ cipherdata.key_type = RTA_DATA_IMM;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ cipherdata.algtype = OP_PCL_IPSEC_AES_CBC;
+ cipherdata.algmode = OP_ALG_AAI_CBC;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ cipherdata.algtype = OP_PCL_IPSEC_3DES;
+ cipherdata.algmode = OP_ALG_AAI_CBC;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ cipherdata.algtype = OP_PCL_IPSEC_AES_CTR;
+ cipherdata.algmode = OP_ALG_AAI_CTR;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_NULL:
+ cipherdata.algtype = OP_PCL_IPSEC_NULL;
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
+ RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+ cipher_xform->algo);
+ goto out;
+ default:
+ RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+ cipher_xform->algo);
+ goto out;
+ }
+
+ if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ struct ip ip4_hdr;
+
+ flc->dhr = SEC_FLC_DHR_OUTBOUND;
+ ip4_hdr.ip_v = IPVERSION;
+ ip4_hdr.ip_hl = 5;
+ ip4_hdr.ip_len = rte_cpu_to_be_16(sizeof(ip4_hdr));
+ ip4_hdr.ip_tos = ipsec_xform->tunnel.ipv4.dscp;
+ ip4_hdr.ip_id = 0;
+ ip4_hdr.ip_off = 0;
+ ip4_hdr.ip_ttl = ipsec_xform->tunnel.ipv4.ttl;
+ ip4_hdr.ip_p = 0x32;
+ ip4_hdr.ip_sum = 0;
+ ip4_hdr.ip_src = ipsec_xform->tunnel.ipv4.src_ip;
+ ip4_hdr.ip_dst = ipsec_xform->tunnel.ipv4.dst_ip;
+ ip4_hdr.ip_sum = calc_chksum((uint16_t *)(void *)&ip4_hdr,
+ sizeof(struct ip));
+
+ /* For Sec Proto only one descriptor is required. */
+ memset(&encap_pdb, 0, sizeof(struct ipsec_encap_pdb));
+ encap_pdb.options = (IPVERSION << PDBNH_ESP_ENCAP_SHIFT) |
+ PDBOPTS_ESP_OIHI_PDB_INL |
+ PDBOPTS_ESP_IVSRC |
+ PDBHMO_ESP_ENCAP_DTTL;
+ encap_pdb.spi = ipsec_xform->spi;
+ encap_pdb.ip_hdr_len = sizeof(struct ip);
+
+ session->dir = DIR_ENC;
+ bufsize = cnstr_shdsc_ipsec_new_encap(priv->flc_desc[0].desc,
+ 1, 0, &encap_pdb,
+ (uint8_t *)&ip4_hdr,
+ &cipherdata, &authdata);
+ } else if (ipsec_xform->direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ flc->dhr = SEC_FLC_DHR_INBOUND;
+ memset(&decap_pdb, 0, sizeof(struct ipsec_decap_pdb));
+ decap_pdb.options = sizeof(struct ip) << 16;
+ session->dir = DIR_DEC;
+ bufsize = cnstr_shdsc_ipsec_new_decap(priv->flc_desc[0].desc,
+ 1, 0, &decap_pdb, &cipherdata, &authdata);
+ } else
+ goto out;
+ flc->word1_sdl = (uint8_t)bufsize;
+
+ /* Enable the stashing control bit */
+ DPAA2_SET_FLC_RSC(flc);
+ flc->word2_rflc_31_0 = lower_32_bits(
+ (uint64_t)&(((struct dpaa2_sec_qp *)
+ dev->data->queue_pairs[0])->rx_vq) | 0x14);
+ flc->word3_rflc_63_32 = upper_32_bits(
+ (uint64_t)&(((struct dpaa2_sec_qp *)
+ dev->data->queue_pairs[0])->rx_vq));
+
+ /* Set EWS bit i.e. enable write-safe */
+ DPAA2_SET_FLC_EWS(flc);
+ /* Set BS = 1 i.e reuse input buffers as output buffers */
+ DPAA2_SET_FLC_REUSE_BS(flc);
+ /* Set FF = 10; reuse input buffers if they provide sufficient space */
+ DPAA2_SET_FLC_REUSE_FF(flc);
+
+ session->ctxt = priv;
+
+ return 0;
+out:
+ rte_free(session->auth_key.data);
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -1;
+}
+
+static int
+dpaa2_sec_security_session_create(void *dev,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
+ int ret;
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ switch (conf->protocol) {
+ case RTE_SECURITY_PROTOCOL_IPSEC:
+ ret = dpaa2_sec_set_ipsec_session(cdev, conf,
+ sess_private_data);
+ break;
+ case RTE_SECURITY_PROTOCOL_MACSEC:
+ return -ENOTSUP;
+ default:
+ return -EINVAL;
+ }
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "DPAA2 PMD: failed to configure session parameters");
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sec_session_private_data(sess, sess_private_data);
+
+ return ret;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static int
+dpaa2_sec_security_session_destroy(void *dev __rte_unused,
+ struct rte_security_session *sess)
+{
+ PMD_INIT_FUNC_TRACE();
+ void *sess_priv = get_sec_session_private_data(sess);
+
+ dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+
+ rte_free(s->ctxt);
+ rte_free(s->cipher_key.data);
+ rte_free(s->auth_key.data);
+ memset(sess, 0, sizeof(dpaa2_sec_session));
+ set_sec_session_private_data(sess, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+ return 0;
+}
+
+static int
dpaa2_sec_session_configure(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *sess,
@@ -1820,11 +2189,28 @@ static struct rte_cryptodev_ops crypto_ops = {
.session_clear = dpaa2_sec_session_clear,
};
+static const struct rte_security_capability *
+dpaa2_sec_capabilities_get(void *device __rte_unused)
+{
+ return dpaa2_sec_security_cap;
+}
+
+struct rte_security_ops dpaa2_sec_security_ops = {
+ .session_create = dpaa2_sec_security_session_create,
+ .session_update = NULL,
+ .session_stats_get = NULL,
+ .session_destroy = dpaa2_sec_security_session_destroy,
+ .set_pkt_metadata = NULL,
+ .capabilities_get = dpaa2_sec_capabilities_get
+};
+
static int
dpaa2_sec_uninit(const struct rte_cryptodev *dev)
{
struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+ rte_free(dev->data->security_ctx);
+
rte_mempool_free(internals->fle_pool);
PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
@@ -1839,6 +2225,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
struct dpaa2_sec_dev_private *internals;
struct rte_device *dev = cryptodev->device;
struct rte_dpaa2_device *dpaa2_dev;
+ struct rte_security_ctx *security_instance;
struct fsl_mc_io *dpseci;
uint16_t token;
struct dpseci_attr attr;
@@ -1855,12 +2242,23 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
cryptodev->driver_id = cryptodev_driver_id;
cryptodev->dev_ops = &crypto_ops;
+ security_instance = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (security_instance == NULL)
+ return -ENOMEM;
+ security_instance->state = RTE_SECURITY_INSTANCE_VALID;
+ security_instance->device = (void *)cryptodev;
+ security_instance->ops = &dpaa2_sec_security_ops;
+ security_instance->sess_cnt = 0;
+
+ cryptodev->data->security_ctx = security_instance;
cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_SECURITY;
internals = cryptodev->data->dev_private;
internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 3849a05..14e71df 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -67,6 +67,11 @@ enum shr_desc_type {
#define DIR_ENC 1
#define DIR_DEC 0
+#define DPAA2_SET_FLC_EWS(flc) (flc->word1_bits23_16 |= 0x1)
+#define DPAA2_SET_FLC_RSC(flc) (flc->word1_bits31_24 |= 0x1)
+#define DPAA2_SET_FLC_REUSE_BS(flc) (flc->mode_bits |= 0x8000)
+#define DPAA2_SET_FLC_REUSE_FF(flc) (flc->mode_bits |= 0x2000)
+
/* SEC Flow Context Descriptor */
struct sec_flow_context {
/* word 0 */
@@ -411,4 +416,61 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
+
+static const struct rte_security_capability dpaa2_sec_security_cap[] = {
+ { /* IPsec Lookaside Protocol offload ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = dpaa2_sec_capabilities
+ },
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = dpaa2_sec_capabilities
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+};
+
+/**
+ * Checksum
+ *
+ * @param buffer calculate chksum for buffer
+ * @param len buffer length
+ *
+ * @return checksum value in host cpu order
+ */
+static inline uint16_t
+calc_chksum(void *buffer, int len)
+{
+ uint16_t *buf = (uint16_t *)buffer;
+ uint32_t sum = 0;
+ uint16_t result;
+
+ for (sum = 0; len > 1; len -= 2)
+ sum += *buf++;
+
+ if (len == 1)
+ sum += *(unsigned char *)buf;
+
+ sum = (sum >> 16) + (sum & 0xFFFF);
+ sum += (sum >> 16);
+ result = ~sum;
+
+ return result;
+}
+
#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v4 12/12] examples/ipsec-secgw: add support for security offload
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (10 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 11/12] crypto/dpaa2_sec: add support for protocol offload ipsec Akhil Goyal
@ 2017-10-14 22:17 ` Akhil Goyal
2017-10-15 12:51 ` Aviad Yehezkel
2017-10-16 10:44 ` [dpdk-dev] [PATCH v4 00/12] introduce security offload library Thomas Monjalon
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
13 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-14 22:17 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Ipsec-secgw application is modified so that it can support the
following types of actions for crypto operations:
1. full protocol offload using crypto devices.
2. inline ipsec using ethernet devices to perform crypto operations
3. full protocol offload using ethernet devices.
4. non protocol offload
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
doc/guides/sample_app_ug/ipsec_secgw.rst | 52 +++++-
examples/ipsec-secgw/esp.c | 120 ++++++++----
examples/ipsec-secgw/esp.h | 10 -
examples/ipsec-secgw/ipsec-secgw.c | 5 +
examples/ipsec-secgw/ipsec.c | 308 ++++++++++++++++++++++++++-----
examples/ipsec-secgw/ipsec.h | 32 +++-
examples/ipsec-secgw/sa.c | 151 +++++++++++----
7 files changed, 545 insertions(+), 133 deletions(-)
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index b675cba..892977e 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -52,13 +52,22 @@ The application classifies the ports as *Protected* and *Unprotected*.
Thus, traffic received on an Unprotected or Protected port is considered
Inbound or Outbound respectively.
+The application also supports complete IPSec protocol offload to hardware
+(look aside crypto accelerator or ethernet device). It also supports
+inline ipsec processing by the supported ethernet device during transmission.
+These modes can be selected during the SA creation configuration.
+
+In case of complete protocol offload, the processing of headers (ESP and outer
+IP header) is done by the hardware and the application does not need to
+add/remove them during outbound/inbound processing.
+
The Path for IPsec Inbound traffic is:
* Read packets from the port.
* Classify packets between IPv4 and ESP.
* Perform Inbound SA lookup for ESP packets based on their SPI.
-* Perform Verification/Decryption.
-* Remove ESP and outer IP header
+* Perform Verification/Decryption (Not needed in case of inline ipsec).
+* Remove ESP and outer IP header (Not needed in case of protocol offload).
* Inbound SP check using ACL of decrypted packets and any other IPv4 packets.
* Routing.
* Write packet to port.
@@ -68,8 +77,8 @@ The Path for the IPsec Outbound traffic is:
* Read packets from the port.
* Perform Outbound SP check using ACL of all IPv4 traffic.
* Perform Outbound SA lookup for packets that need IPsec protection.
-* Add ESP and outer IP header.
-* Perform Encryption/Digest.
+* Add ESP and outer IP header (Not needed in case protocol offload).
+* Perform Encryption/Digest (Not needed in case of inline ipsec).
* Routing.
* Write packet to port.
@@ -385,7 +394,7 @@ The SA rule syntax is shown as follows:
.. code-block:: console
sa <dir> <spi> <cipher_algo> <cipher_key> <auth_algo> <auth_key>
- <mode> <src_ip> <dst_ip>
+ <mode> <src_ip> <dst_ip> <action_type> <port_id>
where each options means:
@@ -526,6 +535,34 @@ where each options means:
* *dst X.X.X.X* for IPv4
* *dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX* for IPv6
+``<type>``
+
+ * Action type to specify the security action. This option specifies
+ whether the SA is to be processed with look aside protocol offload to a HW
+ accelerator, protocol offload on an ethernet device, or inline
+ crypto processing on the ethernet device during transmission.
+
+ * Optional: Yes, default type *no-offload*
+
+ * Available options:
+
+ * *lookaside-protocol-offload*: look aside protocol offload to HW accelerator
+ * *inline-protocol-offload*: inline protocol offload on ethernet device
+ * *inline-crypto-offload*: inline crypto processing on ethernet device
+ * *no-offload*: no offloading to hardware
+
+ ``<port_id>``
+
+ * Port/device ID of the ethernet/crypto accelerator for which the SA is
+ configured. This option is used when *type* is NOT *no-offload*
+
+ * Optional: No, if *type* is not *no-offload*
+
+ * Syntax:
+
+ * *port_id X* X is a valid device number in decimal
+
+
Example SA rules:
.. code-block:: console
@@ -545,6 +582,11 @@ Example SA rules:
aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode ipv4-tunnel src 172.16.2.5 dst 172.16.1.5
+ sa out 5 cipher_algo aes-128-cbc cipher_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
+ auth_algo sha1-hmac auth_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
+ mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
+ type lookaside-protocol-offload port_id 4
+
Routing rule syntax
^^^^^^^^^^^^^^^^^^^
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 12c6f8c..781b162 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -58,8 +58,11 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_sym_op *sym_cop;
int32_t payload_len, ip_hdr_len;
- RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+ return 0;
+
+ RTE_ASSERT(m != NULL);
RTE_ASSERT(cop != NULL);
ip4 = rte_pktmbuf_mtod(m, struct ip *);
@@ -175,29 +178,44 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
RTE_ASSERT(sa != NULL);
RTE_ASSERT(cop != NULL);
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
+ if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ else
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ } else
+ cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ }
+
if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
RTE_LOG(ERR, IPSEC_ESP, "failed crypto op\n");
return -1;
}
- nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
- rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
- pad_len = nexthdr - 1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
+ sa->ol_flags & RTE_SECURITY_RX_HW_TRAILER_OFFLOAD) {
+ nexthdr = &m->inner_esp_next_proto;
+ } else {
+ nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
+ rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
+ pad_len = nexthdr - 1;
+
+ padding = pad_len - *pad_len;
+ for (i = 0; i < *pad_len; i++) {
+ if (padding[i] != i + 1) {
+ RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
+ return -EINVAL;
+ }
+ }
- padding = pad_len - *pad_len;
- for (i = 0; i < *pad_len; i++) {
- if (padding[i] != i + 1) {
- RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
+ if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
+ RTE_LOG(ERR, IPSEC_ESP,
+ "failed to remove pad_len + digest\n");
return -EINVAL;
}
}
- if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
- RTE_LOG(ERR, IPSEC_ESP,
- "failed to remove pad_len + digest\n");
- return -EINVAL;
- }
-
if (unlikely(sa->flags == TRANSPORT)) {
ip = rte_pktmbuf_mtod(m, struct ip *);
ip4 = (struct ip *)rte_pktmbuf_adj(m,
@@ -226,7 +244,7 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct ip *ip4;
struct ip6_hdr *ip6;
struct esp_hdr *esp = NULL;
- uint8_t *padding, *new_ip, nlp;
+ uint8_t *padding = NULL, *new_ip, nlp;
struct rte_crypto_sym_op *sym_cop;
int32_t i;
uint16_t pad_payload_len, pad_len = 0;
@@ -236,7 +254,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
RTE_ASSERT(sa != NULL);
RTE_ASSERT(sa->flags == IP4_TUNNEL || sa->flags == IP6_TUNNEL ||
sa->flags == TRANSPORT);
- RTE_ASSERT(cop != NULL);
ip4 = rte_pktmbuf_mtod(m, struct ip *);
if (likely(ip4->ip_v == IPVERSION)) {
@@ -290,12 +307,19 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
return -EINVAL;
}
- padding = (uint8_t *)rte_pktmbuf_append(m, pad_len + sa->digest_len);
- if (unlikely(padding == NULL)) {
- RTE_LOG(ERR, IPSEC_ESP, "not enough mbuf trailing space\n");
- return -ENOSPC;
+ /* Add trailer padding if it is not constructed by HW */
+ if (sa->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+ (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
+ !(sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD))) {
+ padding = (uint8_t *)rte_pktmbuf_append(m, pad_len +
+ sa->digest_len);
+ if (unlikely(padding == NULL)) {
+ RTE_LOG(ERR, IPSEC_ESP,
+ "not enough mbuf trailing space\n");
+ return -ENOSPC;
+ }
+ rte_prefetch0(padding);
}
- rte_prefetch0(padding);
switch (sa->flags) {
case IP4_TUNNEL:
@@ -328,15 +352,46 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
esp->spi = rte_cpu_to_be_32(sa->spi);
esp->seq = rte_cpu_to_be_32((uint32_t)sa->seq);
+ /* set iv */
uint64_t *iv = (uint64_t *)(esp + 1);
+ if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ *iv = rte_cpu_to_be_64(sa->seq);
+ } else {
+ switch (sa->cipher_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ memset(iv, 0, sa->iv_len);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ *iv = rte_cpu_to_be_64(sa->seq);
+ break;
+ default:
+ RTE_LOG(ERR, IPSEC_ESP,
+ "unsupported cipher algorithm %u\n",
+ sa->cipher_algo);
+ return -EINVAL;
+ }
+ }
+
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD) {
+ /* Set the inner esp next protocol for HW trailer */
+ m->inner_esp_next_proto = nlp;
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ } else {
+ padding[pad_len - 2] = pad_len - 2;
+ padding[pad_len - 1] = nlp;
+ }
+ goto done;
+ }
+ RTE_ASSERT(cop != NULL);
sym_cop = get_sym_cop(cop);
sym_cop->m_src = m;
if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
uint8_t *aad;
- *iv = rte_cpu_to_be_64(sa->seq);
sym_cop->aead.data.offset = ip_hdr_len +
sizeof(struct esp_hdr) + sa->iv_len;
sym_cop->aead.data.length = pad_payload_len;
@@ -365,13 +420,11 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
switch (sa->cipher_algo) {
case RTE_CRYPTO_CIPHER_NULL:
case RTE_CRYPTO_CIPHER_AES_CBC:
- memset(iv, 0, sa->iv_len);
sym_cop->cipher.data.offset = ip_hdr_len +
sizeof(struct esp_hdr);
sym_cop->cipher.data.length = pad_payload_len + sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
- *iv = rte_cpu_to_be_64(sa->seq);
sym_cop->cipher.data.offset = ip_hdr_len +
sizeof(struct esp_hdr) + sa->iv_len;
sym_cop->cipher.data.length = pad_payload_len;
@@ -413,21 +466,26 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
}
+done:
return 0;
}
int
-esp_outbound_post(struct rte_mbuf *m __rte_unused,
- struct ipsec_sa *sa __rte_unused,
- struct rte_crypto_op *cop)
+esp_outbound_post(struct rte_mbuf *m,
+ struct ipsec_sa *sa,
+ struct rte_crypto_op *cop)
{
RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
- RTE_ASSERT(cop != NULL);
- if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
- RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
- return -1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ } else {
+ RTE_ASSERT(cop != NULL);
+ if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
+ RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
+ return -1;
+ }
}
return 0;
diff --git a/examples/ipsec-secgw/esp.h b/examples/ipsec-secgw/esp.h
index fa5cc8a..23601e3 100644
--- a/examples/ipsec-secgw/esp.h
+++ b/examples/ipsec-secgw/esp.h
@@ -35,16 +35,6 @@
struct mbuf;
-/* RFC4303 */
-struct esp_hdr {
- uint32_t spi;
- uint32_t seq;
- /* Payload */
- /* Padding */
- /* Pad Length */
- /* Next Header */
- /* Integrity Check Value - ICV */
-};
int
esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f931de6..6e18e84 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1317,6 +1317,11 @@ port_init(uint16_t portid)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SECURITY)
+ port_conf.rxmode.enable_sec = 1;
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SECURITY)
+ port_conf.txmode.enable_sec = 1;
+
ret = rte_eth_dev_configure(portid, nb_rx_queue, nb_tx_queue,
&port_conf);
if (ret < 0)
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index daa1d7b..6423e3e 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -37,7 +37,9 @@
#include <rte_branch_prediction.h>
#include <rte_log.h>
#include <rte_crypto.h>
+#include <rte_security.h>
#include <rte_cryptodev.h>
+#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_hash.h>
@@ -49,7 +51,7 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
{
struct rte_cryptodev_info cdev_info;
unsigned long cdev_id_qp = 0;
- int32_t ret;
+ int32_t ret = 0;
struct cdev_key key = { 0 };
key.lcore_id = (uint8_t)rte_lcore_id();
@@ -58,16 +60,19 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
key.auth_algo = (uint8_t)sa->auth_algo;
key.aead_algo = (uint8_t)sa->aead_algo;
- ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
- (void **)&cdev_id_qp);
- if (ret < 0) {
- RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
- "auth_algo %u aead_algo %u\n",
- key.lcore_id,
- key.cipher_algo,
- key.auth_algo,
- key.aead_algo);
- return -1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+ ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
+ (void **)&cdev_id_qp);
+ if (ret < 0) {
+ RTE_LOG(ERR, IPSEC,
+ "No cryptodev: core %u, cipher_algo %u, "
+ "auth_algo %u aead_algo %u\n",
+ key.lcore_id,
+ key.cipher_algo,
+ key.auth_algo,
+ key.aead_algo);
+ return -1;
+ }
}
RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
@@ -75,23 +80,153 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
ipsec_ctx->tbl[cdev_id_qp].id,
ipsec_ctx->tbl[cdev_id_qp].qp);
- sa->crypto_session = rte_cryptodev_sym_session_create(
- ipsec_ctx->session_pool);
- rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
- sa->crypto_session, sa->xforms,
- ipsec_ctx->session_pool);
-
- rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
- if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
- ret = rte_cryptodev_queue_pair_attach_sym_session(
- ipsec_ctx->tbl[cdev_id_qp].id,
- ipsec_ctx->tbl[cdev_id_qp].qp,
- sa->crypto_session);
- if (ret < 0) {
- RTE_LOG(ERR, IPSEC,
- "Session cannot be attached to qp %u ",
- ipsec_ctx->tbl[cdev_id_qp].qp);
- return -1;
+ if (sa->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+ struct rte_security_session_conf sess_conf = {
+ .action_type = sa->type,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .spi = sa->spi,
+ .salt = sa->salt,
+ .options = { 0 },
+ .direction = sa->direction,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = (sa->flags == IP4_TUNNEL ||
+ sa->flags == IP6_TUNNEL) ?
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL :
+ RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ },
+ .crypto_xform = sa->xforms
+
+ };
+
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) {
+ struct rte_security_ctx *ctx = (struct rte_security_ctx *)
+ rte_cryptodev_get_sec_ctx(
+ ipsec_ctx->tbl[cdev_id_qp].id);
+
+ if (sess_conf.ipsec.mode ==
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ struct rte_security_ipsec_tunnel_param *tunnel =
+ &sess_conf.ipsec.tunnel;
+ if (sa->flags == IP4_TUNNEL) {
+ tunnel->type =
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+ tunnel->ipv4.ttl = IPDEFTTL;
+
+ memcpy((uint8_t *)&tunnel->ipv4.src_ip,
+ (uint8_t *)&sa->src.ip.ip4, 4);
+
+ memcpy((uint8_t *)&tunnel->ipv4.dst_ip,
+ (uint8_t *)&sa->dst.ip.ip4, 4);
+ }
+ /* TODO support for Transport and IPV6 tunnel */
+ }
+
+ sa->sec_session = rte_security_session_create(ctx,
+ &sess_conf, ipsec_ctx->session_pool);
+ if (sa->sec_session == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "SEC Session init failed: err: %d\n", ret);
+ return -1;
+ }
+ } else if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ struct rte_flow_error err;
+ struct rte_security_ctx *ctx = (struct rte_security_ctx *)
+ rte_eth_dev_get_sec_ctx(
+ sa->portid);
+ const struct rte_security_capability *sec_cap;
+
+ sa->sec_session = rte_security_session_create(ctx,
+ &sess_conf, ipsec_ctx->session_pool);
+ if (sa->sec_session == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "SEC Session init failed: err: %d\n", ret);
+ return -1;
+ }
+
+ sec_cap = rte_security_capabilities_get(ctx);
+
+ /* iterate until ESP tunnel*/
+ while (sec_cap->action !=
+ RTE_SECURITY_ACTION_TYPE_NONE) {
+
+ if (sec_cap->action == sa->type &&
+ sec_cap->protocol ==
+ RTE_SECURITY_PROTOCOL_IPSEC &&
+ sec_cap->ipsec.mode ==
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+ sec_cap->ipsec.direction == sa->direction)
+ break;
+ sec_cap++;
+ }
+
+ if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
+ RTE_LOG(ERR, IPSEC,
+ "No suitable security capability found\n");
+ return -1;
+ }
+
+ sa->ol_flags = sec_cap->ol_flags;
+ sa->security_ctx = ctx;
+ sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+
+ sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ sa->pattern[1].mask = &rte_flow_item_ipv4_mask;
+ if (sa->flags & IP6_TUNNEL) {
+ sa->pattern[1].spec = &sa->ipv6_spec;
+ memcpy(sa->ipv6_spec.hdr.dst_addr,
+ sa->dst.ip.ip6.ip6_b, 16);
+ memcpy(sa->ipv6_spec.hdr.src_addr,
+ sa->src.ip.ip6.ip6_b, 16);
+ } else {
+ sa->pattern[1].spec = &sa->ipv4_spec;
+ sa->ipv4_spec.hdr.dst_addr = sa->dst.ip.ip4;
+ sa->ipv4_spec.hdr.src_addr = sa->src.ip.ip4;
+ }
+
+ sa->pattern[2].type = RTE_FLOW_ITEM_TYPE_ESP;
+ sa->pattern[2].spec = &sa->esp_spec;
+ sa->pattern[2].mask = &rte_flow_item_esp_mask;
+ sa->esp_spec.hdr.spi = sa->spi;
+
+ sa->pattern[3].type = RTE_FLOW_ITEM_TYPE_END;
+
+ sa->action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ sa->action[0].conf = sa->sec_session;
+
+ sa->action[1].type = RTE_FLOW_ACTION_TYPE_END;
+
+ sa->attr.egress = (sa->direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS);
+ sa->flow = rte_flow_create(sa->portid,
+ &sa->attr, sa->pattern, sa->action, &err);
+ if (sa->flow == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "Failed to create ipsec flow msg: %s\n",
+ err.message);
+ return -1;
+ }
+ }
+ } else {
+ sa->crypto_session = rte_cryptodev_sym_session_create(
+ ipsec_ctx->session_pool);
+ rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
+ sa->crypto_session, sa->xforms,
+ ipsec_ctx->session_pool);
+
+ rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
+ &cdev_info);
+ if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
+ ret = rte_cryptodev_queue_pair_attach_sym_session(
+ ipsec_ctx->tbl[cdev_id_qp].id,
+ ipsec_ctx->tbl[cdev_id_qp].qp,
+ sa->crypto_session);
+ if (ret < 0) {
+ RTE_LOG(ERR, IPSEC,
+ "Session cannot be attached to qp %u\n",
+ ipsec_ctx->tbl[cdev_id_qp].qp);
+ return -1;
+ }
}
}
sa->cdev_id_qp = cdev_id_qp;
@@ -129,7 +264,9 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
{
int32_t ret = 0, i;
struct ipsec_mbuf_metadata *priv;
+ struct rte_crypto_sym_op *sym_cop;
struct ipsec_sa *sa;
+ struct cdev_qp *cqp;
for (i = 0; i < nb_pkts; i++) {
if (unlikely(sas[i] == NULL)) {
@@ -144,23 +281,76 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
sa = sas[i];
priv->sa = sa;
- priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-
- rte_prefetch0(&priv->sym_cop);
-
- if ((unlikely(sa->crypto_session == NULL)) &&
- create_session(ipsec_ctx, sa)) {
- rte_pktmbuf_free(pkts[i]);
- continue;
- }
-
- rte_crypto_op_attach_sym_session(&priv->cop,
- sa->crypto_session);
-
- ret = xform_func(pkts[i], sa, &priv->cop);
- if (unlikely(ret)) {
- rte_pktmbuf_free(pkts[i]);
+ switch (sa->type) {
+ case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->sec_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ sym_cop = get_sym_cop(&priv->cop);
+ sym_cop->m_src = pkts[i];
+
+ rte_security_attach_session(&priv->cop,
+ sa->sec_session);
+ break;
+ case RTE_SECURITY_ACTION_TYPE_NONE:
+
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->crypto_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ rte_crypto_op_attach_sym_session(&priv->cop,
+ sa->crypto_session);
+
+ ret = xform_func(pkts[i], sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+ break;
+ case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ break;
+ case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->sec_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ rte_security_attach_session(&priv->cop,
+ sa->sec_session);
+
+ ret = xform_func(pkts[i], sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ cqp = &ipsec_ctx->tbl[sa->cdev_id_qp];
+ cqp->ol_pkts[cqp->ol_pkts_cnt++] = pkts[i];
+ if (sa->ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(
+ sa->security_ctx,
+ sa->sec_session, pkts[i], NULL);
continue;
}
@@ -171,7 +361,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
static inline int
ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
- struct rte_mbuf *pkts[], uint16_t max_pkts)
+ struct rte_mbuf *pkts[], uint16_t max_pkts)
{
int32_t nb_pkts = 0, ret = 0, i, j, nb_cops;
struct ipsec_mbuf_metadata *priv;
@@ -186,6 +376,19 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
+ while (cqp->ol_pkts_cnt > 0 && nb_pkts < max_pkts) {
+ pkt = cqp->ol_pkts[--cqp->ol_pkts_cnt];
+ rte_prefetch0(pkt);
+ priv = get_priv(pkt);
+ sa = priv->sa;
+ ret = xform_func(pkt, sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkt);
+ continue;
+ }
+ pkts[nb_pkts++] = pkt;
+ }
+
if (cqp->in_flight == 0)
continue;
@@ -203,11 +406,14 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
RTE_ASSERT(sa != NULL);
- ret = xform_func(pkt, sa, cops[j]);
- if (unlikely(ret))
- rte_pktmbuf_free(pkt);
- else
- pkts[nb_pkts++] = pkt;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+ ret = xform_func(pkt, sa, cops[j]);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkt);
+ continue;
+ }
+ }
+ pkts[nb_pkts++] = pkt;
}
}
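
For readers skimming this hunk: the per-packet dispatch that ipsec_enqueue() now performs on sa->type boils down to the sketch below. The enqueue_one() helper is hypothetical and shown only for illustration; prefetching, op type/status initialisation, lazy session creation, m_src assignment and the inline-crypto ol_pkts/metadata bookkeeping present in the real code are omitted.

#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_mbuf.h>
#include <rte_security.h>

#include "ipsec.h" /* struct ipsec_sa, ipsec_xform_fn from the example app */

static inline int
enqueue_one(ipsec_xform_fn xform_func, struct ipsec_sa *sa,
            struct rte_mbuf *m, struct rte_crypto_op *cop)
{
        switch (sa->type) {
        case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
                /* HW adds/strips ESP itself; only attach the security
                 * session to the crypto op. */
                return rte_security_attach_session(cop, sa->sec_session);
        case RTE_SECURITY_ACTION_TYPE_NONE:
                /* Plain lookaside crypto: attach the sym session and let
                 * esp_inbound()/esp_outbound() build the crypto op. */
                rte_crypto_op_attach_sym_session(cop, sa->crypto_session);
                return xform_func(m, sa, cop);
        case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
                /* NIC encrypts/decrypts inline; the ESP header/trailer is
                 * still built in software by xform_func. */
                rte_security_attach_session(cop, sa->sec_session);
                return xform_func(m, sa, cop);
        case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
                /* Fully offloaded to the NIC: nothing to do per packet. */
                return 0;
        }
        return -1;
}
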
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 9e22b1b..613785f 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -38,6 +38,8 @@
#include <rte_byteorder.h>
#include <rte_crypto.h>
+#include <rte_security.h>
+#include <rte_flow.h>
#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
#define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2
@@ -99,7 +101,10 @@ struct ipsec_sa {
uint32_t cdev_id_qp;
uint64_t seq;
uint32_t salt;
- struct rte_cryptodev_sym_session *crypto_session;
+ union {
+ struct rte_cryptodev_sym_session *crypto_session;
+ struct rte_security_session *sec_session;
+ };
enum rte_crypto_cipher_algorithm cipher_algo;
enum rte_crypto_auth_algorithm auth_algo;
enum rte_crypto_aead_algorithm aead_algo;
@@ -117,7 +122,28 @@ struct ipsec_sa {
uint8_t auth_key[MAX_KEY_SIZE];
uint16_t auth_key_len;
uint16_t aad_len;
- struct rte_crypto_sym_xform *xforms;
+ union {
+ struct rte_crypto_sym_xform *xforms;
+ struct rte_security_ipsec_xform *sec_xform;
+ };
+ enum rte_security_session_action_type type;
+ enum rte_security_ipsec_sa_direction direction;
+ uint16_t portid;
+ struct rte_security_ctx *security_ctx;
+ uint32_t ol_flags;
+
+#define MAX_RTE_FLOW_PATTERN (4)
+#define MAX_RTE_FLOW_ACTIONS (2)
+ struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN];
+ struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS];
+ struct rte_flow_attr attr;
+ union {
+ struct rte_flow_item_ipv4 ipv4_spec;
+ struct rte_flow_item_ipv6 ipv6_spec;
+ };
+ struct rte_flow_item_esp esp_spec;
+ struct rte_flow *flow;
+ struct rte_security_session_conf sess_conf;
} __rte_cache_aligned;
struct ipsec_mbuf_metadata {
@@ -133,6 +159,8 @@ struct cdev_qp {
uint16_t in_flight;
uint16_t len;
struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+ struct rte_mbuf *ol_pkts[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+ uint16_t ol_pkts_cnt;
};
struct ipsec_ctx {
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index ef94475..d8ee47b 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -41,16 +41,20 @@
#include <rte_memzone.h>
#include <rte_crypto.h>
+#include <rte_security.h>
#include <rte_cryptodev.h>
#include <rte_byteorder.h>
#include <rte_errno.h>
#include <rte_ip.h>
#include <rte_random.h>
+#include <rte_ethdev.h>
#include "ipsec.h"
#include "esp.h"
#include "parser.h"
+#define IPDEFTTL 64
+
struct supported_cipher_algo {
const char *keyword;
enum rte_crypto_cipher_algorithm algo;
@@ -238,6 +242,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
uint32_t src_p = 0;
uint32_t dst_p = 0;
uint32_t mode_p = 0;
+ uint32_t type_p = 0;
+ uint32_t portid_p = 0;
if (strcmp(tokens[0], "in") == 0) {
ri = &nb_sa_in;
@@ -550,6 +556,52 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
continue;
}
+ if (strcmp(tokens[ti], "type") == 0) {
+ APP_CHECK_PRESENCE(type_p, tokens[ti], status);
+ if (status->status < 0)
+ return;
+
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+
+ if (strcmp(tokens[ti], "inline-crypto-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO;
+ else if (strcmp(tokens[ti],
+ "inline-protocol-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ else if (strcmp(tokens[ti],
+ "lookaside-protocol-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
+ else if (strcmp(tokens[ti], "no-offload") == 0)
+ rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
+ else {
+ APP_CHECK(0, status, "Invalid input \"%s\"",
+ tokens[ti]);
+ return;
+ }
+
+ type_p = 1;
+ continue;
+ }
+
+ if (strcmp(tokens[ti], "port_id") == 0) {
+ APP_CHECK_PRESENCE(portid_p, tokens[ti], status);
+ if (status->status < 0)
+ return;
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+ rule->portid = atoi(tokens[ti]);
+ if (status->status < 0)
+ return;
+ portid_p = 1;
+ continue;
+ }
+
/* unrecognizeable input */
APP_CHECK(0, status, "unrecognized input \"%s\"",
tokens[ti]);
@@ -580,6 +632,14 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
if (status->status < 0)
return;
+ if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+ printf("Missing portid option, falling back to non-offload\n");
+
+ if (!type_p || !portid_p) {
+ rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
+ rule->portid = -1;
+ }
+
*ri = *ri + 1;
}
@@ -647,9 +707,11 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
struct sa_ctx {
struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES];
- struct {
- struct rte_crypto_sym_xform a;
- struct rte_crypto_sym_xform b;
+ union {
+ struct {
+ struct rte_crypto_sym_xform a;
+ struct rte_crypto_sym_xform b;
+ };
} xf[IPSEC_SA_MAX_ENTRIES];
};
@@ -682,6 +744,33 @@ sa_create(const char *name, int32_t socket_id)
}
static int
+check_eth_dev_caps(uint16_t portid, uint32_t inbound)
+{
+ struct rte_eth_dev_info dev_info;
+
+ rte_eth_dev_info_get(portid, &dev_info);
+
+ if (inbound) {
+ if ((dev_info.rx_offload_capa &
+ DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware RX IPSec offload is not supported\n");
+ return -EINVAL;
+ }
+
+ } else { /* outbound */
+ if ((dev_info.tx_offload_capa &
+ DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware TX IPSec offload is not supported\n");
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+
+static int
sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
uint32_t nb_entries, uint32_t inbound)
{
@@ -700,6 +789,16 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
*sa = entries[i];
sa->seq = 0;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL ||
+ sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (check_eth_dev_caps(sa->portid, inbound))
+ return -EINVAL;
+ }
+
+ sa->direction = (inbound == 1) ?
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS :
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+
switch (sa->flags) {
case IP4_TUNNEL:
sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -709,37 +808,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
iv_length = 16;
- if (inbound) {
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
- sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
- sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.aead.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.aead.op =
- RTE_CRYPTO_AEAD_OP_DECRYPT;
- sa_ctx->xf[idx].a.next = NULL;
- sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
- sa_ctx->xf[idx].a.aead.iv.length = iv_length;
- sa_ctx->xf[idx].a.aead.aad_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.aead.digest_length =
- sa->digest_len;
- } else { /* outbound */
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
- sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
- sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.aead.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.aead.op =
- RTE_CRYPTO_AEAD_OP_ENCRYPT;
- sa_ctx->xf[idx].a.next = NULL;
- sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
- sa_ctx->xf[idx].a.aead.iv.length = iv_length;
- sa_ctx->xf[idx].a.aead.aad_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.aead.digest_length =
- sa->digest_len;
- }
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
+ sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
+ sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].a.aead.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].a.aead.op = (inbound == 1) ?
+ RTE_CRYPTO_AEAD_OP_DECRYPT :
+ RTE_CRYPTO_AEAD_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.next = NULL;
+ sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].a.aead.iv.length = iv_length;
+ sa_ctx->xf[idx].a.aead.aad_length =
+ sa->aad_len;
+ sa_ctx->xf[idx].a.aead.digest_length =
+ sa->digest_len;
sa->xforms = &sa_ctx->xf[idx].a;
--
2.9.3
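
As a side note on the inline-crypto path of this patch: create_session() steers matching ESP traffic to the security session by programming an rte_flow rule on the ethernet port. A minimal standalone sketch of an equivalent rule follows; install_inline_rule() is a hypothetical helper, the port id, SPI and session pointer are placeholders, and the tunnel-endpoint (IPv4/IPv6 address) match used by the real code is left out.

#include <stdint.h>
#include <rte_flow.h>
#include <rte_security.h>

static struct rte_flow *
install_inline_rule(uint16_t port_id, uint32_t spi_be,
                    struct rte_security_session *sess, int egress)
{
        struct rte_flow_attr attr = { .ingress = !egress, .egress = !!egress };
        struct rte_flow_item_esp esp_spec = { .hdr = { .spi = spi_be } };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_ESP,
                  .spec = &esp_spec, .mask = &rte_flow_item_esp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sess },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        /* Returns NULL and fills err on failure, as in create_session(). */
        return rte_flow_create(port_id, &attr, pattern, actions, &err);
}
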
* Re: [dpdk-dev] [PATCH v4 12/12] examples/ipsec-secgw: add support for security offload
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 12/12] examples/ipsec-secgw: add support for security offload Akhil Goyal
@ 2017-10-15 12:51 ` Aviad Yehezkel
0 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-15 12:51 UTC (permalink / raw)
To: dev
On 10/15/2017 1:17 AM, Akhil Goyal wrote:
> Ipsec-secgw application is modified so that it can support the
> following types of actions for crypto operations:
> 1. full protocol offload using crypto devices.
> 2. inline ipsec using ethernet devices to perform crypto operations
> 3. full protocol offload using ethernet devices.
> 4. no protocol offload
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> ---
> doc/guides/sample_app_ug/ipsec_secgw.rst | 52 +++++-
> examples/ipsec-secgw/esp.c | 120 ++++++++----
> examples/ipsec-secgw/esp.h | 10 -
> examples/ipsec-secgw/ipsec-secgw.c | 5 +
> examples/ipsec-secgw/ipsec.c | 308 ++++++++++++++++++++++++++-----
> examples/ipsec-secgw/ipsec.h | 32 +++-
> examples/ipsec-secgw/sa.c | 151 +++++++++++----
> 7 files changed, 545 insertions(+), 133 deletions(-)
>
> diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
> index b675cba..892977e 100644
> --- a/doc/guides/sample_app_ug/ipsec_secgw.rst
> +++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
> @@ -52,13 +52,22 @@ The application classifies the ports as *Protected* and *Unprotected*.
> Thus, traffic received on an Unprotected or Protected port is considered
> Inbound or Outbound respectively.
>
> +The application also supports complete IPsec protocol offload to hardware
> +(a lookaside crypto accelerator or an ethernet device). It also supports
> +inline IPsec processing by a supporting ethernet device during transmission.
> +These modes can be selected during the SA creation configuration.
> +
> +In case of complete protocol offload, the processing of headers (ESP and outer
> +IP header) is done by the hardware and the application does not need to
> +add/remove them during outbound/inbound processing.
> +
> The Path for IPsec Inbound traffic is:
>
> * Read packets from the port.
> * Classify packets between IPv4 and ESP.
> * Perform Inbound SA lookup for ESP packets based on their SPI.
> -* Perform Verification/Decryption.
> -* Remove ESP and outer IP header
> +* Perform Verification/Decryption (Not needed in case of inline ipsec).
> +* Remove ESP and outer IP header (Not needed in case of protocol offload).
> * Inbound SP check using ACL of decrypted packets and any other IPv4 packets.
> * Routing.
> * Write packet to port.
> @@ -68,8 +77,8 @@ The Path for the IPsec Outbound traffic is:
> * Read packets from the port.
> * Perform Outbound SP check using ACL of all IPv4 traffic.
> * Perform Outbound SA lookup for packets that need IPsec protection.
> -* Add ESP and outer IP header.
> -* Perform Encryption/Digest.
> +* Add ESP and outer IP header (Not needed in case of protocol offload).
> +* Perform Encryption/Digest (Not needed in case of inline ipsec).
> * Routing.
> * Write packet to port.
>
> @@ -385,7 +394,7 @@ The SA rule syntax is shown as follows:
> .. code-block:: console
>
> sa <dir> <spi> <cipher_algo> <cipher_key> <auth_algo> <auth_key>
> - <mode> <src_ip> <dst_ip>
> + <mode> <src_ip> <dst_ip> <action_type> <port_id>
>
> where each options means:
>
> @@ -526,6 +535,34 @@ where each options means:
> * *dst X.X.X.X* for IPv4
> * *dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX* for IPv6
>
> +``<type>``
> +
> + * Action type to specify the security action. This option specifies
> + whether the SA is handled with lookaside protocol offload to a HW
> + accelerator, protocol offload on an ethernet device, or inline
> + crypto processing on the ethernet device during transmission.
> +
> + * Optional: Yes, default type *no-offload*
> +
> + * Available options:
> +
> + * *lookaside-protocol-offload*: lookaside protocol offload to HW accelerator
> + * *inline-protocol-offload*: inline protocol offload on ethernet device
> + * *inline-crypto-offload*: inline crypto processing on ethernet device
> + * *no-offload*: no offloading to hardware
> +
> +``<port_id>``
> +
> + * Port/device ID of the ethernet/crypto accelerator for which the SA is
> + configured. This option is used when *type* is NOT *no-offload*.
> +
> + * Optional: No, if *type* is not *no-offload*
> +
> + * Syntax:
> +
> + * *port_id X* X is a valid device number in decimal
> +
> +
> Example SA rules:
>
> .. code-block:: console
> @@ -545,6 +582,11 @@ Example SA rules:
> aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
> mode ipv4-tunnel src 172.16.2.5 dst 172.16.1.5
>
> + sa out 5 cipher_algo aes-128-cbc cipher_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
> + auth_algo sha1-hmac auth_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
> + mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
> + type lookaside-protocol-offload port_id 4
> +
> Routing rule syntax
> ^^^^^^^^^^^^^^^^^^^
>
> diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
> index 12c6f8c..781b162 100644
> --- a/examples/ipsec-secgw/esp.c
> +++ b/examples/ipsec-secgw/esp.c
> @@ -58,8 +58,11 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> struct rte_crypto_sym_op *sym_cop;
> int32_t payload_len, ip_hdr_len;
>
> - RTE_ASSERT(m != NULL);
> RTE_ASSERT(sa != NULL);
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
> + return 0;
> +
> + RTE_ASSERT(m != NULL);
> RTE_ASSERT(cop != NULL);
>
> ip4 = rte_pktmbuf_mtod(m, struct ip *);
> @@ -175,29 +178,44 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
> RTE_ASSERT(sa != NULL);
> RTE_ASSERT(cop != NULL);
>
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
> + if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
> + if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
> + cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + else
> + cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
> + } else
> + cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> + }
> +
> if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
> RTE_LOG(ERR, IPSEC_ESP, "failed crypto op\n");
> return -1;
> }
>
> - nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
> - rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
> - pad_len = nexthdr - 1;
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
> + sa->ol_flags & RTE_SECURITY_RX_HW_TRAILER_OFFLOAD) {
> + nexthdr = &m->inner_esp_next_proto;
> + } else {
> + nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
> + rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
> + pad_len = nexthdr - 1;
> +
> + padding = pad_len - *pad_len;
> + for (i = 0; i < *pad_len; i++) {
> + if (padding[i] != i + 1) {
> + RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
> + return -EINVAL;
> + }
> + }
>
> - padding = pad_len - *pad_len;
> - for (i = 0; i < *pad_len; i++) {
> - if (padding[i] != i + 1) {
> - RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
> + if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
> + RTE_LOG(ERR, IPSEC_ESP,
> + "failed to remove pad_len + digest\n");
> return -EINVAL;
> }
> }
>
> - if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
> - RTE_LOG(ERR, IPSEC_ESP,
> - "failed to remove pad_len + digest\n");
> - return -EINVAL;
> - }
> -
> if (unlikely(sa->flags == TRANSPORT)) {
> ip = rte_pktmbuf_mtod(m, struct ip *);
> ip4 = (struct ip *)rte_pktmbuf_adj(m,
> @@ -226,7 +244,7 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> struct ip *ip4;
> struct ip6_hdr *ip6;
> struct esp_hdr *esp = NULL;
> - uint8_t *padding, *new_ip, nlp;
> + uint8_t *padding = NULL, *new_ip, nlp;
> struct rte_crypto_sym_op *sym_cop;
> int32_t i;
> uint16_t pad_payload_len, pad_len = 0;
> @@ -236,7 +254,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> RTE_ASSERT(sa != NULL);
> RTE_ASSERT(sa->flags == IP4_TUNNEL || sa->flags == IP6_TUNNEL ||
> sa->flags == TRANSPORT);
> - RTE_ASSERT(cop != NULL);
>
> ip4 = rte_pktmbuf_mtod(m, struct ip *);
> if (likely(ip4->ip_v == IPVERSION)) {
> @@ -290,12 +307,19 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> return -EINVAL;
> }
>
> - padding = (uint8_t *)rte_pktmbuf_append(m, pad_len + sa->digest_len);
> - if (unlikely(padding == NULL)) {
> - RTE_LOG(ERR, IPSEC_ESP, "not enough mbuf trailing space\n");
> - return -ENOSPC;
> + /* Add trailer padding if it is not constructed by HW */
> + if (sa->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
> + (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
> + !(sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD))) {
> + padding = (uint8_t *)rte_pktmbuf_append(m, pad_len +
> + sa->digest_len);
> + if (unlikely(padding == NULL)) {
> + RTE_LOG(ERR, IPSEC_ESP,
> + "not enough mbuf trailing space\n");
> + return -ENOSPC;
> + }
> + rte_prefetch0(padding);
> }
> - rte_prefetch0(padding);
>
> switch (sa->flags) {
> case IP4_TUNNEL:
> @@ -328,15 +352,46 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> esp->spi = rte_cpu_to_be_32(sa->spi);
> esp->seq = rte_cpu_to_be_32((uint32_t)sa->seq);
>
> + /* set iv */
> uint64_t *iv = (uint64_t *)(esp + 1);
> + if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
> + *iv = rte_cpu_to_be_64(sa->seq);
> + } else {
> + switch (sa->cipher_algo) {
> + case RTE_CRYPTO_CIPHER_NULL:
> + case RTE_CRYPTO_CIPHER_AES_CBC:
> + memset(iv, 0, sa->iv_len);
> + break;
> + case RTE_CRYPTO_CIPHER_AES_CTR:
> + *iv = rte_cpu_to_be_64(sa->seq);
> + break;
> + default:
> + RTE_LOG(ERR, IPSEC_ESP,
> + "unsupported cipher algorithm %u\n",
> + sa->cipher_algo);
> + return -EINVAL;
> + }
> + }
> +
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
> + if (sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD) {
> + /* Set the inner esp next protocol for HW trailer */
> + m->inner_esp_next_proto = nlp;
> + m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
> + } else {
> + padding[pad_len - 2] = pad_len - 2;
> + padding[pad_len - 1] = nlp;
> + }
> + goto done;
> + }
>
> + RTE_ASSERT(cop != NULL);
> sym_cop = get_sym_cop(cop);
> sym_cop->m_src = m;
>
> if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
> uint8_t *aad;
>
> - *iv = rte_cpu_to_be_64(sa->seq);
> sym_cop->aead.data.offset = ip_hdr_len +
> sizeof(struct esp_hdr) + sa->iv_len;
> sym_cop->aead.data.length = pad_payload_len;
> @@ -365,13 +420,11 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> switch (sa->cipher_algo) {
> case RTE_CRYPTO_CIPHER_NULL:
> case RTE_CRYPTO_CIPHER_AES_CBC:
> - memset(iv, 0, sa->iv_len);
> sym_cop->cipher.data.offset = ip_hdr_len +
> sizeof(struct esp_hdr);
> sym_cop->cipher.data.length = pad_payload_len + sa->iv_len;
> break;
> case RTE_CRYPTO_CIPHER_AES_CTR:
> - *iv = rte_cpu_to_be_64(sa->seq);
> sym_cop->cipher.data.offset = ip_hdr_len +
> sizeof(struct esp_hdr) + sa->iv_len;
> sym_cop->cipher.data.length = pad_payload_len;
> @@ -413,21 +466,26 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> rte_pktmbuf_pkt_len(m) - sa->digest_len);
> }
>
> +done:
> return 0;
> }
>
> int
> -esp_outbound_post(struct rte_mbuf *m __rte_unused,
> - struct ipsec_sa *sa __rte_unused,
> - struct rte_crypto_op *cop)
> +esp_outbound_post(struct rte_mbuf *m,
> + struct ipsec_sa *sa,
> + struct rte_crypto_op *cop)
> {
> RTE_ASSERT(m != NULL);
> RTE_ASSERT(sa != NULL);
> - RTE_ASSERT(cop != NULL);
>
> - if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
> - RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
> - return -1;
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
> + m->ol_flags |= PKT_TX_SEC_OFFLOAD;
> + } else {
> + RTE_ASSERT(cop != NULL);
> + if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
> + RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
> + return -1;
> + }
> }
>
> return 0;
> diff --git a/examples/ipsec-secgw/esp.h b/examples/ipsec-secgw/esp.h
> index fa5cc8a..23601e3 100644
> --- a/examples/ipsec-secgw/esp.h
> +++ b/examples/ipsec-secgw/esp.h
> @@ -35,16 +35,6 @@
>
> struct mbuf;
>
> -/* RFC4303 */
> -struct esp_hdr {
> - uint32_t spi;
> - uint32_t seq;
> - /* Payload */
> - /* Padding */
> - /* Pad Length */
> - /* Next Header */
> - /* Integrity Check Value - ICV */
> -};
>
> int
> esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index f931de6..6e18e84 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -1317,6 +1317,11 @@ port_init(uint16_t portid)
> printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> nb_rx_queue, nb_tx_queue);
>
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SECURITY)
> + port_conf.rxmode.enable_sec = 1;
> + if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SECURITY)
> + port_conf.txmode.enable_sec = 1;
> +
> ret = rte_eth_dev_configure(portid, nb_rx_queue, nb_tx_queue,
> &port_conf);
> if (ret < 0)
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index daa1d7b..6423e3e 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -37,7 +37,9 @@
> #include <rte_branch_prediction.h>
> #include <rte_log.h>
> #include <rte_crypto.h>
> +#include <rte_security.h>
> #include <rte_cryptodev.h>
> +#include <rte_ethdev.h>
> #include <rte_mbuf.h>
> #include <rte_hash.h>
>
> @@ -49,7 +51,7 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
> {
> struct rte_cryptodev_info cdev_info;
> unsigned long cdev_id_qp = 0;
> - int32_t ret;
> + int32_t ret = 0;
> struct cdev_key key = { 0 };
>
> key.lcore_id = (uint8_t)rte_lcore_id();
> @@ -58,16 +60,19 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
> key.auth_algo = (uint8_t)sa->auth_algo;
> key.aead_algo = (uint8_t)sa->aead_algo;
>
> - ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
> - (void **)&cdev_id_qp);
> - if (ret < 0) {
> - RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
> - "auth_algo %u aead_algo %u\n",
> - key.lcore_id,
> - key.cipher_algo,
> - key.auth_algo,
> - key.aead_algo);
> - return -1;
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
> + ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
> + (void **)&cdev_id_qp);
> + if (ret < 0) {
> + RTE_LOG(ERR, IPSEC,
> + "No cryptodev: core %u, cipher_algo %u, "
> + "auth_algo %u aead_algo %u\n",
> + key.lcore_id,
> + key.cipher_algo,
> + key.auth_algo,
> + key.aead_algo);
> + return -1;
> + }
> }
>
> RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
> @@ -75,23 +80,153 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
> ipsec_ctx->tbl[cdev_id_qp].id,
> ipsec_ctx->tbl[cdev_id_qp].qp);
>
> - sa->crypto_session = rte_cryptodev_sym_session_create(
> - ipsec_ctx->session_pool);
> - rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
> - sa->crypto_session, sa->xforms,
> - ipsec_ctx->session_pool);
> -
> - rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
> - if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
> - ret = rte_cryptodev_queue_pair_attach_sym_session(
> - ipsec_ctx->tbl[cdev_id_qp].id,
> - ipsec_ctx->tbl[cdev_id_qp].qp,
> - sa->crypto_session);
> - if (ret < 0) {
> - RTE_LOG(ERR, IPSEC,
> - "Session cannot be attached to qp %u ",
> - ipsec_ctx->tbl[cdev_id_qp].qp);
> - return -1;
> + if (sa->type != RTE_SECURITY_ACTION_TYPE_NONE) {
> + struct rte_security_session_conf sess_conf = {
> + .action_type = sa->type,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + .ipsec = {
> + .spi = sa->spi,
> + .salt = sa->salt,
> + .options = { 0 },
> + .direction = sa->direction,
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = (sa->flags == IP4_TUNNEL ||
> + sa->flags == IP6_TUNNEL) ?
> + RTE_SECURITY_IPSEC_SA_MODE_TUNNEL :
> + RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + },
> + .crypto_xform = sa->xforms
> +
> + };
> +
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) {
> + struct rte_security_ctx *ctx = (struct rte_security_ctx *)
> + rte_cryptodev_get_sec_ctx(
> + ipsec_ctx->tbl[cdev_id_qp].id);
> +
> + if (sess_conf.ipsec.mode ==
> + RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
> + struct rte_security_ipsec_tunnel_param *tunnel =
> + &sess_conf.ipsec.tunnel;
> + if (sa->flags == IP4_TUNNEL) {
> + tunnel->type =
> + RTE_SECURITY_IPSEC_TUNNEL_IPV4;
> + tunnel->ipv4.ttl = IPDEFTTL;
> +
> + memcpy((uint8_t *)&tunnel->ipv4.src_ip,
> + (uint8_t *)&sa->src.ip.ip4, 4);
> +
> + memcpy((uint8_t *)&tunnel->ipv4.dst_ip,
> + (uint8_t *)&sa->dst.ip.ip4, 4);
> + }
> + /* TODO support for Transport and IPV6 tunnel */
> + }
> +
> + sa->sec_session = rte_security_session_create(ctx,
> + &sess_conf, ipsec_ctx->session_pool);
> + if (sa->sec_session == NULL) {
> + RTE_LOG(ERR, IPSEC,
> + "SEC Session init failed: err: %d\n", ret);
> + return -1;
> + }
> + } else if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
> + struct rte_flow_error err;
> + struct rte_security_ctx *ctx = (struct rte_security_ctx *)
> + rte_eth_dev_get_sec_ctx(
> + sa->portid);
> + const struct rte_security_capability *sec_cap;
> +
> + sa->sec_session = rte_security_session_create(ctx,
> + &sess_conf, ipsec_ctx->session_pool);
> + if (sa->sec_session == NULL) {
> + RTE_LOG(ERR, IPSEC,
> + "SEC Session init failed: err: %d\n", ret);
> + return -1;
> + }
> +
> + sec_cap = rte_security_capabilities_get(ctx);
> +
> + /* iterate until ESP tunnel */
> + while (sec_cap->action !=
> + RTE_SECURITY_ACTION_TYPE_NONE) {
> +
> + if (sec_cap->action == sa->type &&
> + sec_cap->protocol ==
> + RTE_SECURITY_PROTOCOL_IPSEC &&
> + sec_cap->ipsec.mode ==
> + RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
> + sec_cap->ipsec.direction == sa->direction)
> + break;
> + sec_cap++;
> + }
> +
> + if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
> + RTE_LOG(ERR, IPSEC,
> + "No suitable security capability found\n");
> + return -1;
> + }
> +
> + sa->ol_flags = sec_cap->ol_flags;
> + sa->security_ctx = ctx;
> + sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> +
> + sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
> + sa->pattern[1].mask = &rte_flow_item_ipv4_mask;
> + if (sa->flags & IP6_TUNNEL) {
> + sa->pattern[1].spec = &sa->ipv6_spec;
> + memcpy(sa->ipv6_spec.hdr.dst_addr,
> + sa->dst.ip.ip6.ip6_b, 16);
> + memcpy(sa->ipv6_spec.hdr.src_addr,
> + sa->src.ip.ip6.ip6_b, 16);
> + } else {
> + sa->pattern[1].spec = &sa->ipv4_spec;
> + sa->ipv4_spec.hdr.dst_addr = sa->dst.ip.ip4;
> + sa->ipv4_spec.hdr.src_addr = sa->src.ip.ip4;
> + }
> +
> + sa->pattern[2].type = RTE_FLOW_ITEM_TYPE_ESP;
> + sa->pattern[2].spec = &sa->esp_spec;
> + sa->pattern[2].mask = &rte_flow_item_esp_mask;
> + sa->esp_spec.hdr.spi = sa->spi;
> +
> + sa->pattern[3].type = RTE_FLOW_ITEM_TYPE_END;
> +
> + sa->action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
> + sa->action[0].conf = sa->sec_session;
> +
> + sa->action[1].type = RTE_FLOW_ACTION_TYPE_END;
> +
> + sa->attr.egress = (sa->direction ==
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS);
> + sa->flow = rte_flow_create(sa->portid,
> + &sa->attr, sa->pattern, sa->action, &err);
> + if (sa->flow == NULL) {
> + RTE_LOG(ERR, IPSEC,
> + "Failed to create ipsec flow msg: %s\n",
> + err.message);
> + return -1;
> + }
> + }
> + } else {
> + sa->crypto_session = rte_cryptodev_sym_session_create(
> + ipsec_ctx->session_pool);
> + rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
> + sa->crypto_session, sa->xforms,
> + ipsec_ctx->session_pool);
> +
> + rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
> + &cdev_info);
> + if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
> + ret = rte_cryptodev_queue_pair_attach_sym_session(
> + ipsec_ctx->tbl[cdev_id_qp].id,
> + ipsec_ctx->tbl[cdev_id_qp].qp,
> + sa->crypto_session);
> + if (ret < 0) {
> + RTE_LOG(ERR, IPSEC,
> + "Session cannot be attached to qp %u\n",
> + ipsec_ctx->tbl[cdev_id_qp].qp);
> + return -1;
> + }
> }
> }
> sa->cdev_id_qp = cdev_id_qp;
> @@ -129,7 +264,9 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
> {
> int32_t ret = 0, i;
> struct ipsec_mbuf_metadata *priv;
> + struct rte_crypto_sym_op *sym_cop;
> struct ipsec_sa *sa;
> + struct cdev_qp *cqp;
>
> for (i = 0; i < nb_pkts; i++) {
> if (unlikely(sas[i] == NULL)) {
> @@ -144,23 +281,76 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
> sa = sas[i];
> priv->sa = sa;
>
> - priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> - priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> -
> - rte_prefetch0(&priv->sym_cop);
> -
> - if ((unlikely(sa->crypto_session == NULL)) &&
> - create_session(ipsec_ctx, sa)) {
> - rte_pktmbuf_free(pkts[i]);
> - continue;
> - }
> -
> - rte_crypto_op_attach_sym_session(&priv->cop,
> - sa->crypto_session);
> -
> - ret = xform_func(pkts[i], sa, &priv->cop);
> - if (unlikely(ret)) {
> - rte_pktmbuf_free(pkts[i]);
> + switch (sa->type) {
> + case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
> + priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> + priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +
> + rte_prefetch0(&priv->sym_cop);
> +
> + if ((unlikely(sa->sec_session == NULL)) &&
> + create_session(ipsec_ctx, sa)) {
> + rte_pktmbuf_free(pkts[i]);
> + continue;
> + }
> +
> + sym_cop = get_sym_cop(&priv->cop);
> + sym_cop->m_src = pkts[i];
> +
> + rte_security_attach_session(&priv->cop,
> + sa->sec_session);
> + break;
> + case RTE_SECURITY_ACTION_TYPE_NONE:
> +
> + priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> + priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +
> + rte_prefetch0(&priv->sym_cop);
> +
> + if ((unlikely(sa->crypto_session == NULL)) &&
> + create_session(ipsec_ctx, sa)) {
> + rte_pktmbuf_free(pkts[i]);
> + continue;
> + }
> +
> + rte_crypto_op_attach_sym_session(&priv->cop,
> + sa->crypto_session);
> +
> + ret = xform_func(pkts[i], sa, &priv->cop);
> + if (unlikely(ret)) {
> + rte_pktmbuf_free(pkts[i]);
> + continue;
> + }
> + break;
> + case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
> + break;
> + case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
> + priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> + priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +
> + rte_prefetch0(&priv->sym_cop);
> +
> + if ((unlikely(sa->sec_session == NULL)) &&
> + create_session(ipsec_ctx, sa)) {
> + rte_pktmbuf_free(pkts[i]);
> + continue;
> + }
> +
> + rte_security_attach_session(&priv->cop,
> + sa->sec_session);
> +
> + ret = xform_func(pkts[i], sa, &priv->cop);
> + if (unlikely(ret)) {
> + rte_pktmbuf_free(pkts[i]);
> + continue;
> + }
> +
> + cqp = &ipsec_ctx->tbl[sa->cdev_id_qp];
> + cqp->ol_pkts[cqp->ol_pkts_cnt++] = pkts[i];
> + if (sa->ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
> + rte_security_set_pkt_metadata(
> + sa->security_ctx,
> + sa->sec_session, pkts[i], NULL);
> continue;
> }
>
> @@ -171,7 +361,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
>
> static inline int
> ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
> - struct rte_mbuf *pkts[], uint16_t max_pkts)
> + struct rte_mbuf *pkts[], uint16_t max_pkts)
> {
> int32_t nb_pkts = 0, ret = 0, i, j, nb_cops;
> struct ipsec_mbuf_metadata *priv;
> @@ -186,6 +376,19 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
> if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
> ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
>
> + while (cqp->ol_pkts_cnt > 0 && nb_pkts < max_pkts) {
> + pkt = cqp->ol_pkts[--cqp->ol_pkts_cnt];
> + rte_prefetch0(pkt);
> + priv = get_priv(pkt);
> + sa = priv->sa;
> + ret = xform_func(pkt, sa, &priv->cop);
> + if (unlikely(ret)) {
> + rte_pktmbuf_free(pkt);
> + continue;
> + }
> + pkts[nb_pkts++] = pkt;
> + }
> +
> if (cqp->in_flight == 0)
> continue;
>
> @@ -203,11 +406,14 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
>
> RTE_ASSERT(sa != NULL);
>
> - ret = xform_func(pkt, sa, cops[j]);
> - if (unlikely(ret))
> - rte_pktmbuf_free(pkt);
> - else
> - pkts[nb_pkts++] = pkt;
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
> + ret = xform_func(pkt, sa, cops[j]);
> + if (unlikely(ret)) {
> + rte_pktmbuf_free(pkt);
> + continue;
> + }
> + }
> + pkts[nb_pkts++] = pkt;
> }
> }
>
> diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> index 9e22b1b..613785f 100644
> --- a/examples/ipsec-secgw/ipsec.h
> +++ b/examples/ipsec-secgw/ipsec.h
> @@ -38,6 +38,8 @@
>
> #include <rte_byteorder.h>
> #include <rte_crypto.h>
> +#include <rte_security.h>
> +#include <rte_flow.h>
>
> #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
> #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2
> @@ -99,7 +101,10 @@ struct ipsec_sa {
> uint32_t cdev_id_qp;
> uint64_t seq;
> uint32_t salt;
> - struct rte_cryptodev_sym_session *crypto_session;
> + union {
> + struct rte_cryptodev_sym_session *crypto_session;
> + struct rte_security_session *sec_session;
> + };
> enum rte_crypto_cipher_algorithm cipher_algo;
> enum rte_crypto_auth_algorithm auth_algo;
> enum rte_crypto_aead_algorithm aead_algo;
> @@ -117,7 +122,28 @@ struct ipsec_sa {
> uint8_t auth_key[MAX_KEY_SIZE];
> uint16_t auth_key_len;
> uint16_t aad_len;
> - struct rte_crypto_sym_xform *xforms;
> + union {
> + struct rte_crypto_sym_xform *xforms;
> + struct rte_security_ipsec_xform *sec_xform;
> + };
> + enum rte_security_session_action_type type;
> + enum rte_security_ipsec_sa_direction direction;
> + uint16_t portid;
> + struct rte_security_ctx *security_ctx;
> + uint32_t ol_flags;
> +
> +#define MAX_RTE_FLOW_PATTERN (4)
> +#define MAX_RTE_FLOW_ACTIONS (2)
> + struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN];
> + struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS];
> + struct rte_flow_attr attr;
> + union {
> + struct rte_flow_item_ipv4 ipv4_spec;
> + struct rte_flow_item_ipv6 ipv6_spec;
> + };
> + struct rte_flow_item_esp esp_spec;
> + struct rte_flow *flow;
> + struct rte_security_session_conf sess_conf;
> } __rte_cache_aligned;
>
> struct ipsec_mbuf_metadata {
> @@ -133,6 +159,8 @@ struct cdev_qp {
> uint16_t in_flight;
> uint16_t len;
> struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
> + struct rte_mbuf *ol_pkts[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
> + uint16_t ol_pkts_cnt;
> };
>
> struct ipsec_ctx {
> diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
> index ef94475..d8ee47b 100644
> --- a/examples/ipsec-secgw/sa.c
> +++ b/examples/ipsec-secgw/sa.c
> @@ -41,16 +41,20 @@
>
> #include <rte_memzone.h>
> #include <rte_crypto.h>
> +#include <rte_security.h>
> #include <rte_cryptodev.h>
> #include <rte_byteorder.h>
> #include <rte_errno.h>
> #include <rte_ip.h>
> #include <rte_random.h>
> +#include <rte_ethdev.h>
>
> #include "ipsec.h"
> #include "esp.h"
> #include "parser.h"
>
> +#define IPDEFTTL 64
> +
> struct supported_cipher_algo {
> const char *keyword;
> enum rte_crypto_cipher_algorithm algo;
> @@ -238,6 +242,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
> uint32_t src_p = 0;
> uint32_t dst_p = 0;
> uint32_t mode_p = 0;
> + uint32_t type_p = 0;
> + uint32_t portid_p = 0;
>
> if (strcmp(tokens[0], "in") == 0) {
> ri = &nb_sa_in;
> @@ -550,6 +556,52 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
> continue;
> }
>
> + if (strcmp(tokens[ti], "type") == 0) {
> + APP_CHECK_PRESENCE(type_p, tokens[ti], status);
> + if (status->status < 0)
> + return;
> +
> + INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
> + if (status->status < 0)
> + return;
> +
> + if (strcmp(tokens[ti], "inline-crypto-offload") == 0)
> + rule->type =
> + RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO;
> + else if (strcmp(tokens[ti],
> + "inline-protocol-offload") == 0)
> + rule->type =
> + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
> + else if (strcmp(tokens[ti],
> + "lookaside-protocol-offload") == 0)
> + rule->type =
> + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
> + else if (strcmp(tokens[ti], "no-offload") == 0)
> + rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
> + else {
> + APP_CHECK(0, status, "Invalid input \"%s\"",
> + tokens[ti]);
> + return;
> + }
> +
> + type_p = 1;
> + continue;
> + }
> +
> + if (strcmp(tokens[ti], "port_id") == 0) {
> + APP_CHECK_PRESENCE(portid_p, tokens[ti], status);
> + if (status->status < 0)
> + return;
> + INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
> + if (status->status < 0)
> + return;
> + rule->portid = atoi(tokens[ti]);
> + if (status->status < 0)
> + return;
> + portid_p = 1;
> + continue;
> + }
> +
> /* unrecognizeable input */
> APP_CHECK(0, status, "unrecognized input \"%s\"",
> tokens[ti]);
> @@ -580,6 +632,14 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
> if (status->status < 0)
> return;
>
> + if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
> + printf("Missing portid option, falling back to non-offload\n");
> +
> + if (!type_p || !portid_p) {
> + rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
> + rule->portid = -1;
> + }
> +
> *ri = *ri + 1;
> }
>
> @@ -647,9 +707,11 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
>
> struct sa_ctx {
> struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES];
> - struct {
> - struct rte_crypto_sym_xform a;
> - struct rte_crypto_sym_xform b;
> + union {
> + struct {
> + struct rte_crypto_sym_xform a;
> + struct rte_crypto_sym_xform b;
> + };
> } xf[IPSEC_SA_MAX_ENTRIES];
> };
>
> @@ -682,6 +744,33 @@ sa_create(const char *name, int32_t socket_id)
> }
>
> static int
> +check_eth_dev_caps(uint16_t portid, uint32_t inbound)
> +{
> + struct rte_eth_dev_info dev_info;
> +
> + rte_eth_dev_info_get(portid, &dev_info);
> +
> + if (inbound) {
> + if ((dev_info.rx_offload_capa &
> + DEV_RX_OFFLOAD_SECURITY) == 0) {
> + RTE_LOG(WARNING, PORT,
> + "hardware RX IPSec offload is not supported\n");
> + return -EINVAL;
> + }
> +
> + } else { /* outbound */
> + if ((dev_info.tx_offload_capa &
> + DEV_TX_OFFLOAD_SECURITY) == 0) {
> + RTE_LOG(WARNING, PORT,
> + "hardware TX IPSec offload is not supported\n");
> + return -EINVAL;
> + }
> + }
> + return 0;
> +}
> +
> +
> +static int
> sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
> uint32_t nb_entries, uint32_t inbound)
> {
> @@ -700,6 +789,16 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
> *sa = entries[i];
> sa->seq = 0;
>
> + if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL ||
> + sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
> + if (check_eth_dev_caps(sa->portid, inbound))
> + return -EINVAL;
> + }
> +
> + sa->direction = (inbound == 1) ?
> + RTE_SECURITY_IPSEC_SA_DIR_INGRESS :
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
> +
> switch (sa->flags) {
> case IP4_TUNNEL:
> sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
> @@ -709,37 +808,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
> if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
> iv_length = 16;
>
> - if (inbound) {
> - sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
> - sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
> - sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
> - sa_ctx->xf[idx].a.aead.key.length =
> - sa->cipher_key_len;
> - sa_ctx->xf[idx].a.aead.op =
> - RTE_CRYPTO_AEAD_OP_DECRYPT;
> - sa_ctx->xf[idx].a.next = NULL;
> - sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
> - sa_ctx->xf[idx].a.aead.iv.length = iv_length;
> - sa_ctx->xf[idx].a.aead.aad_length =
> - sa->aad_len;
> - sa_ctx->xf[idx].a.aead.digest_length =
> - sa->digest_len;
> - } else { /* outbound */
> - sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
> - sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
> - sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
> - sa_ctx->xf[idx].a.aead.key.length =
> - sa->cipher_key_len;
> - sa_ctx->xf[idx].a.aead.op =
> - RTE_CRYPTO_AEAD_OP_ENCRYPT;
> - sa_ctx->xf[idx].a.next = NULL;
> - sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
> - sa_ctx->xf[idx].a.aead.iv.length = iv_length;
> - sa_ctx->xf[idx].a.aead.aad_length =
> - sa->aad_len;
> - sa_ctx->xf[idx].a.aead.digest_length =
> - sa->digest_len;
> - }
> + sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
> + sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
> + sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
> + sa_ctx->xf[idx].a.aead.key.length =
> + sa->cipher_key_len;
> + sa_ctx->xf[idx].a.aead.op = (inbound == 1) ?
> + RTE_CRYPTO_AEAD_OP_DECRYPT :
> + RTE_CRYPTO_AEAD_OP_ENCRYPT;
> + sa_ctx->xf[idx].a.next = NULL;
> + sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
> + sa_ctx->xf[idx].a.aead.iv.length = iv_length;
> + sa_ctx->xf[idx].a.aead.aad_length =
> + sa->aad_len;
> + sa_ctx->xf[idx].a.aead.digest_length =
> + sa->digest_len;
>
> sa->xforms = &sa_ctx->xf[idx].a;
>
Tested-by: Aviad Yehezkel <aviadye@mellanox.com>
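
One detail worth noting in the create_session() changes quoted above: before creating an inline session, the application walks the array returned by rte_security_capabilities_get(), which is terminated by an entry whose action is RTE_SECURITY_ACTION_TYPE_NONE, looking for a matching IPsec tunnel capability. A minimal sketch of that lookup (find_ipsec_tunnel_cap() is a hypothetical helper):

#include <stddef.h>
#include <rte_security.h>

static const struct rte_security_capability *
find_ipsec_tunnel_cap(struct rte_security_ctx *ctx,
                      enum rte_security_session_action_type action,
                      enum rte_security_ipsec_sa_direction dir)
{
        const struct rte_security_capability *cap =
                rte_security_capabilities_get(ctx);

        /* The capability array is terminated by an entry whose action is
         * RTE_SECURITY_ACTION_TYPE_NONE. */
        for (; cap != NULL && cap->action != RTE_SECURITY_ACTION_TYPE_NONE;
                        cap++) {
                if (cap->action == action &&
                                cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC &&
                                cap->ipsec.mode ==
                                        RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
                                cap->ipsec.direction == dir)
                        return cap;
        }
        return NULL; /* no suitable capability advertised by the device */
}
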
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (11 preceding siblings ...)
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 12/12] examples/ipsec-secgw: add support for security offload Akhil Goyal
@ 2017-10-16 10:44 ` Thomas Monjalon
2017-10-20 9:32 ` Thomas Monjalon
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
13 siblings, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-16 10:44 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
15/10/2017 00:17, Akhil Goyal:
> This patchset introduce the rte_security library in DPDK.
> This also includes the sample implementation of drivers and
> changes in ipsec gateway application to demonstrate its usage.
[...]
> This patchset is also available at:
> git://dpdk.org/draft/dpdk-draft-ipsec
> branch: integration_v4
If I understand well, this patchset is the result of the group work?
Nothing else is needed to merge for the IPsec offload features?
If so, please mark other patches as superseded in patchwork.
Thanks
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-16 10:44 ` [dpdk-dev] [PATCH v4 00/12] introduce security offload library Thomas Monjalon
@ 2017-10-20 9:32 ` Thomas Monjalon
2017-10-21 16:13 ` Akhil Goyal
0 siblings, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-20 9:32 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
16/10/2017 12:44, Thomas Monjalon:
> 15/10/2017 00:17, Akhil Goyal:
> > This patchset introduce the rte_security library in DPDK.
> > This also includes the sample implementation of drivers and
> > changes in ipsec gateway application to demonstrate its usage.
> [...]
> > This patchset is also available at:
> > git://dpdk.org/draft/dpdk-draft-ipsec
> > branch: integration_v4
>
> If I understand well, this patchset is the result of the group work?
> Nothing else is needed to merge for the IPsec offload features?
Please answer to make sure we are not forgetting something in RC2.
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-20 9:32 ` Thomas Monjalon
@ 2017-10-21 16:13 ` Akhil Goyal
2017-10-22 20:37 ` Akhil Goyal
0 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-21 16:13 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Hi Thomas,
On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
> 16/10/2017 12:44, Thomas Monjalon:
>> 15/10/2017 00:17, Akhil Goyal:
>>> This patchset introduce the rte_security library in DPDK.
>>> This also includes the sample implementation of drivers and
>>> changes in ipsec gateway application to demonstrate its usage.
>> [...]
>>> This patchset is also available at:
>>> git://dpdk.org/draft/dpdk-draft-ipsec
>>> branch: integration_v4
>>
>> If I understand well, this patchset is the result of the group work?
>> Nothing else is needed to merge for the IPsec offload features?
Yes this patchset is a result of a group work.
We do not need anything else to be merged for ipsec offload features.
But Aviad has made some fixes in the ipsec application which may result
in conflict with the last patch in this series.
So v4 was just rebased over those patches sent by Aviad separately.
We would send a v5 incorporating/answering to all the comments/queries
rebased over the v2 of the Aviad's fixes to the application.
>
> Please answer to make sure we are not forgetting something in RC2.
>
Sorry for the late replies.
I was on PTO this week, so my responses are delayed.
Thanks,
Akhil
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-21 16:13 ` Akhil Goyal
@ 2017-10-22 20:37 ` Akhil Goyal
2017-10-22 20:59 ` Thomas Monjalon
0 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-22 20:37 UTC (permalink / raw)
To: Thomas Monjalon, pablo.de.lara.guarch, radu.nicolau, aviadye,
konstantin.ananyev
Cc: dev, declan.doherty, hemant.agrawal, borisp, sandeep.malik,
jerin.jacob, john.mcnamara, shahafs, olivier.matz
Hi All,
On 10/21/2017 9:43 PM, Akhil Goyal wrote:
> Hi Thomas,
> On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
>> 16/10/2017 12:44, Thomas Monjalon:
>>> 15/10/2017 00:17, Akhil Goyal:
>>>> This patchset introduce the rte_security library in DPDK.
>>>> This also includes the sample implementation of drivers and
>>>> changes in ipsec gateway application to demonstrate its usage.
>>> [...]
>>>> This patchset is also available at:
>>>> git://dpdk.org/draft/dpdk-draft-ipsec
>>>> branch: integration_v4
>>>
>>> If I understand well, this patchset is the result of the group work?
>>> Nothing else is needed to merge for the IPsec offload features?
> Yes this patchset is a result of a group work.
> We do not need anything else to be merged for ipsec offload features.
> But Aviad has made some fixes in the ipsec application which may result
> in conflict with the last patch in this series.
> So v4 was just rebased over those patches sent by Aviad separately.
> We would send a v5 incorporating/answering to all the comments/queries
> rebased over the v2 of the Aviad's fixes to the application.
>
Just for information,
I have rebased the rte_security patches over crypto-next and over
Aviad's v2. The patches are available at the draft tree
"git://dpdk.org/draft/dpdk-draft-ipsec", branch integration_v5.
The patchset include the changes suggested by Thomas and Konstantin on v4.
The patches are not sent to the mailing list as the ipsec-secgw patches
from Aviad needs a v3 and I would like to send the patch set rebased
over the v3. I will send the patchset as soon as Aviad's patches are
ready to be merged(most probably on Monday).
Please let me know in case, there is some risk in getting this series
applied in RC2.
Thanks,
Akhil
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-22 20:37 ` Akhil Goyal
@ 2017-10-22 20:59 ` Thomas Monjalon
2017-10-23 11:44 ` Aviad Yehezkel
2017-10-24 9:41 ` Akhil Goyal
0 siblings, 2 replies; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-22 20:59 UTC (permalink / raw)
To: Akhil Goyal
Cc: pablo.de.lara.guarch, radu.nicolau, aviadye, konstantin.ananyev,
dev, declan.doherty, hemant.agrawal, borisp, sandeep.malik,
jerin.jacob, john.mcnamara, shahafs, olivier.matz
22/10/2017 22:37, Akhil Goyal:
> Hi All,
> On 10/21/2017 9:43 PM, Akhil Goyal wrote:
> > Hi Thomas,
> > On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
> >> 16/10/2017 12:44, Thomas Monjalon:
> >>> 15/10/2017 00:17, Akhil Goyal:
> >>>> This patchset introduce the rte_security library in DPDK.
> >>>> This also includes the sample implementation of drivers and
> >>>> changes in ipsec gateway application to demonstrate its usage.
> >>> [...]
> >>>> This patchset is also available at:
> >>>> git://dpdk.org/draft/dpdk-draft-ipsec
> >>>> branch: integration_v4
> >>>
> >>> If I understand well, this patchset is the result of the group work?
> >>> Nothing else is needed to merge for the IPsec offload features?
> > Yes this patchset is a result of a group work.
> > We do not need anything else to be merged for ipsec offload features.
> > But Aviad has made some fixes in the ipsec application which may result
> > in conflict with the last patch in this series.
> > So v4 was just rebased over those patches sent by Aviad separately.
> > We would send a v5 incorporating/answering to all the comments/queries
> > rebased over the v2 of the Aviad's fixes to the application.
> >
>
> Just for information,
> I have rebased the rte_security patches over crypto-next and over
> Aviad's v2. The patches are available at the draft tree
> "git://dpdk.org/draft/dpdk-draft-ipsec", branch integration_v5.
>
> The patchset include the changes suggested by Thomas and Konstantin on v4.
>
> The patches are not sent to the mailing list as the ipsec-secgw patches
> from Aviad needs a v3 and I would like to send the patch set rebased
> over the v3. I will send the patchset as soon as Aviad's patches are
> ready to be merged(most probably on Monday).
>
> Please let me know in case, there is some risk in getting this series
> applied in RC2.
Thanks for the info Akhil.
A lot of other things are not yet ready for RC2.
You are still in time.
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-22 20:59 ` Thomas Monjalon
@ 2017-10-23 11:44 ` Aviad Yehezkel
2017-10-24 9:41 ` Akhil Goyal
1 sibling, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-23 11:44 UTC (permalink / raw)
To: Thomas Monjalon, Akhil Goyal
Cc: pablo.de.lara.guarch, radu.nicolau, aviadye, konstantin.ananyev,
dev, declan.doherty, hemant.agrawal, borisp, sandeep.malik,
jerin.jacob, john.mcnamara, shahafs, olivier.matz
On 10/22/2017 11:59 PM, Thomas Monjalon wrote:
> 22/10/2017 22:37, Akhil Goyal:
>> Hi All,
>> On 10/21/2017 9:43 PM, Akhil Goyal wrote:
>>> Hi Thomas,
>>> On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
>>>> 16/10/2017 12:44, Thomas Monjalon:
>>>>> 15/10/2017 00:17, Akhil Goyal:
>>>>>> This patchset introduce the rte_security library in DPDK.
>>>>>> This also includes the sample implementation of drivers and
>>>>>> changes in ipsec gateway application to demonstrate its usage.
>>>>> [...]
>>>>>> This patchset is also available at:
>>>>>> git://dpdk.org/draft/dpdk-draft-ipsec
>>>>>> branch: integration_v4
>>>>> If I understand well, this patchset is the result of the group work?
>>>>> Nothing else is needed to merge for the IPsec offload features?
>>> Yes this patchset is a result of a group work.
>>> We do not need anything else to be merged for ipsec offload features.
>>> But Aviad has made some fixes in the ipsec application which may result
>>> in conflict with the last patch in this series.
>>> So v4 was just rebased over those patches sent by Aviad separately.
>>> We would send a v5 incorporating/answering to all the comments/queries
>>> rebased over the v2 of the Aviad's fixes to the application.
>>>
>> Just for information,
>> I have rebased the rte_security patches over crypto-next and over
>> Aviad's v2. The patches are available at the draft tree
>> "git://dpdk.org/draft/dpdk-draft-ipsec", branch integration_v5.
>>
>> The patchset include the changes suggested by Thomas and Konstantin on v4.
>>
>> The patches are not sent to the mailing list as the ipsec-secgw patches
>> from Aviad needs a v3 and I would like to send the patch set rebased
>> over the v3. I will send the patchset as soon as Aviad's patches are
>> ready to be merged(most probably on Monday).
>>
>> Please let me know in case, there is some risk in getting this series
>> applied in RC2.
> Thanks for the info Akhil.
> A lot of other things are not yet ready for RC2.
> You are still in time.
>
I am working on v3 at the moment.
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-22 20:59 ` Thomas Monjalon
2017-10-23 11:44 ` Aviad Yehezkel
@ 2017-10-24 9:41 ` Akhil Goyal
2017-10-24 9:52 ` Thomas Monjalon
1 sibling, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 9:41 UTC (permalink / raw)
To: Thomas Monjalon
Cc: pablo.de.lara.guarch, radu.nicolau, aviadye, konstantin.ananyev,
dev, declan.doherty, hemant.agrawal, borisp, sandeep.malik,
jerin.jacob, john.mcnamara, shahafs, olivier.matz
Hi Thomas,
On 10/23/2017 2:29 AM, Thomas Monjalon wrote:
> 22/10/2017 22:37, Akhil Goyal:
>> Hi All,
>> On 10/21/2017 9:43 PM, Akhil Goyal wrote:
>>> Hi Thomas,
>>> On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
>>>> 16/10/2017 12:44, Thomas Monjalon:
>>>>> 15/10/2017 00:17, Akhil Goyal:
>>>>>> This patchset introduce the rte_security library in DPDK.
>>>>>> This also includes the sample implementation of drivers and
>>>>>> changes in ipsec gateway application to demonstrate its usage.
>>>>> [...]
>>>>>> This patchset is also available at:
>>>>>> git://dpdk.org/draft/dpdk-draft-ipsec
>>>>>> branch: integration_v4
>>>>>
>>>>> If I understand well, this patchset is the result of the group work?
>>>>> Nothing else is needed to merge for the IPsec offload features?
>>> Yes this patchset is a result of a group work.
>>> We do not need anything else to be merged for ipsec offload features.
>>> But Aviad has made some fixes in the ipsec application which may result
>>> in conflict with the last patch in this series.
>>> So v4 was just rebased over those patches sent by Aviad separately.
>>> We would send a v5 incorporating/answering to all the comments/queries
>>> rebased over the v2 of the Aviad's fixes to the application.
>>>
>>
>> Just for information,
>> I have rebased the rte_security patches over crypto-next and over
>> Aviad's v2. The patches are available at the draft tree
>> "git://dpdk.org/draft/dpdk-draft-ipsec", branch integration_v5.
>>
>> The patchset include the changes suggested by Thomas and Konstantin on v4.
>>
>> The patches are not sent to the mailing list as the ipsec-secgw patches
>> from Aviad needs a v3 and I would like to send the patch set rebased
>> over the v3. I will send the patchset as soon as Aviad's patches are
>> ready to be merged(most probably on Monday).
>>
>> Please let me know in case, there is some risk in getting this series
>> applied in RC2.
>
> Thanks for the info Akhil.
> A lot of other things are not yet ready for RC2.
> You are still in time.
>
>
On which should I base v5 - master or crypto-next?
-Akhil
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-24 9:41 ` Akhil Goyal
@ 2017-10-24 9:52 ` Thomas Monjalon
2017-10-24 14:27 ` Akhil Goyal
0 siblings, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-24 9:52 UTC (permalink / raw)
To: Akhil Goyal
Cc: pablo.de.lara.guarch, radu.nicolau, aviadye, konstantin.ananyev,
dev, declan.doherty, hemant.agrawal, borisp, sandeep.malik,
jerin.jacob, john.mcnamara, shahafs, olivier.matz
24/10/2017 11:41, Akhil Goyal:
> Hi Thomas,
> On 10/23/2017 2:29 AM, Thomas Monjalon wrote:
> > 22/10/2017 22:37, Akhil Goyal:
> >> Hi All,
> >> On 10/21/2017 9:43 PM, Akhil Goyal wrote:
> >>> Hi Thomas,
> >>> On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
> >>>> 16/10/2017 12:44, Thomas Monjalon:
> >>>>> 15/10/2017 00:17, Akhil Goyal:
> >>>>>> This patchset introduce the rte_security library in DPDK.
> >>>>>> This also includes the sample implementation of drivers and
> >>>>>> changes in ipsec gateway application to demonstrate its usage.
> >>>>> [...]
> >>>>>> This patchset is also available at:
> >>>>>> git://dpdk.org/draft/dpdk-draft-ipsec
> >>>>>> branch: integration_v4
> >>>>>
> >>>>> If I understand well, this patchset is the result of the group work?
> >>>>> Nothing else is needed to merge for the IPsec offload features?
> >>> Yes this patchset is a result of a group work.
> >>> We do not need anything else to be merged for ipsec offload features.
> >>> But Aviad has made some fixes in the ipsec application which may result
> >>> in conflict with the last patch in this series.
> >>> So v4 was just rebased over those patches sent by Aviad separately.
> >>> We would send a v5 incorporating/answering to all the comments/queries
> >>> rebased over the v2 of the Aviad's fixes to the application.
> >>>
> >>
> >> Just for information,
> >> I have rebased the rte_security patches over crypto-next and over
> >> Aviad's v2. The patches are available at the draft tree
> >> "git://dpdk.org/draft/dpdk-draft-ipsec", branch integration_v5.
> >>
> >> The patchset include the changes suggested by Thomas and Konstantin on v4.
> >>
> >> The patches are not sent to the mailing list as the ipsec-secgw patches
> >> from Aviad needs a v3 and I would like to send the patch set rebased
> >> over the v3. I will send the patchset as soon as Aviad's patches are
> >> ready to be merged(most probably on Monday).
> >>
> >> Please let me know in case, there is some risk in getting this series
> >> applied in RC2.
> >
> > Thanks for the info Akhil.
> > A lot of other things are not yet ready for RC2.
> > You are still in time.
> >
> >
> On which should I base v5 - master or crypto-next?
Up to you.
Just tell me which one you choose please.
* Re: [dpdk-dev] [PATCH v4 00/12] introduce security offload library
2017-10-24 9:52 ` Thomas Monjalon
@ 2017-10-24 14:27 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:27 UTC (permalink / raw)
To: Thomas Monjalon
Cc: pablo.de.lara.guarch, radu.nicolau, aviadye, konstantin.ananyev,
dev, declan.doherty, hemant.agrawal, borisp, sandeep.malik,
jerin.jacob, john.mcnamara, shahafs, olivier.matz
Hi Thomas,
On 10/24/2017 3:22 PM, Thomas Monjalon wrote:
> 24/10/2017 11:41, Akhil Goyal:
>> Hi Thomas,
>> On 10/23/2017 2:29 AM, Thomas Monjalon wrote:
>>> 22/10/2017 22:37, Akhil Goyal:
>>>> Hi All,
>>>> On 10/21/2017 9:43 PM, Akhil Goyal wrote:
>>>>> Hi Thomas,
>>>>> On 10/20/2017 3:02 PM, Thomas Monjalon wrote:
>>>>>> 16/10/2017 12:44, Thomas Monjalon:
>>>>>>> 15/10/2017 00:17, Akhil Goyal:
>>>>>>>> This patchset introduce the rte_security library in DPDK.
>>>>>>>> This also includes the sample implementation of drivers and
>>>>>>>> changes in ipsec gateway application to demonstrate its usage.
>>>>>>> [...]
>>>>>>>> This patchset is also available at:
>>>>>>>> git://dpdk.org/draft/dpdk-draft-ipsec
>>>>>>>> branch: integration_v4
>>>>>>>
>>>>>>> If I understand well, this patchset is the result of the group work?
>>>>>>> Nothing else is needed to merge for the IPsec offload features?
>>>>> Yes this patchset is a result of a group work.
>>>>> We do not need anything else to be merged for ipsec offload features.
>>>>> But Aviad has made some fixes in the ipsec application which may result
>>>>> in conflict with the last patch in this series.
>>>>> So v4 was just rebased over those patches sent by Aviad separately.
>>>>> We would send a v5 incorporating/answering to all the comments/queries
>>>>> rebased over the v2 of the Aviad's fixes to the application.
>>>>>
>>>>
>>>> Just for information,
>>>> I have rebased the rte_security patches over crypto-next and over
>>>> Aviad's v2. The patches are available at the draft tree
>>>> "git://dpdk.org/draft/dpdk-draft-ipsec", branch integration_v5.
>>>>
>>>> The patchset include the changes suggested by Thomas and Konstantin on v4.
>>>>
>>>> The patches are not sent to the mailing list as the ipsec-secgw patches
>>>> from Aviad needs a v3 and I would like to send the patch set rebased
>>>> over the v3. I will send the patchset as soon as Aviad's patches are
>>>> ready to be merged(most probably on Monday).
>>>>
>>>> Please let me know in case, there is some risk in getting this series
>>>> applied in RC2.
>>>
>>> Thanks for the info Akhil.
>>> A lot of other things are not yet ready for RC2.
>>> You are still in time.
>>>
>>>
>> On which should I base v5 - master or crypto-next?
>
> Up to you.
> Just tell me which one you choose please.
>
It is sent over crypto-next, which is rebased over the latest master as of now.
Thanks,
Akhil
* [dpdk-dev] [PATCH v5 00/11] introduce security offload library
2017-10-14 22:17 ` [dpdk-dev] [PATCH v4 " Akhil Goyal
` (12 preceding siblings ...)
2017-10-16 10:44 ` [dpdk-dev] [PATCH v4 00/12] introduce security offload library Thomas Monjalon
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library Akhil Goyal
` (11 more replies)
13 siblings, 12 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
This patchset introduces the rte_security library in DPDK.
It also includes sample driver implementations and
changes in the ipsec gateway application to demonstrate its usage.
The rte_security library is implemented on the idea proposed earlier [1],[2],[3]
to support IPsec inline and lookaside crypto offload. Though the
current focus is only on the IPsec protocol, the library is
not limited to IPsec; it can be extended to other security
protocols, e.g. MACsec, PDCP or DTLS.
With this library, crypto/ethernet devices can register themselves with
the security library to support security offload.
The library supports 3 modes of operation:
1. full protocol offload using crypto devices.
(RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
2. inline ipsec using ethernet devices to perform crypto operations
(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
3. full protocol offload using ethernet devices.
(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
The details of each mode are documented in the patchset in
doc/guides/prog_guide/rte_security.rst
The modifications in the ipsec-secgw application are also documented in
doc/guides/sample_app_ug/ipsec_secgw.rst
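For reference, below is a minimal usage sketch (not part of the
patchset) showing how an application could create an inline-crypto
IPsec session with the API above. It assumes the
rte_eth_dev_get_sec_ctx() helper added by the ethdev patch later in
this series; the SPI, direction, mode and mempool values are arbitrary
examples, and error handling is trimmed.

#include <rte_ethdev.h>
#include <rte_security.h>

/* Illustrative only: ESP transport-mode egress SA, processed inline
 * by the ethernet device (RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO).
 */
static struct rte_security_session *
create_inline_crypto_session(uint16_t port_id, struct rte_mempool *sess_mp,
			     struct rte_crypto_sym_xform *crypto_xform)
{
	struct rte_security_ctx *ctx = (struct rte_security_ctx *)
				       rte_eth_dev_get_sec_ctx(port_id);
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.spi = 5,	/* example SPI */
			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
		},
		.crypto_xform = crypto_xform,
	};

	if (ctx == NULL)
		return NULL;

	/* NULL is returned if the PMD cannot support this configuration. */
	return rte_security_session_create(ctx, &conf, sess_mp);
}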
This patchset is also available at:
git://dpdk.org/draft/dpdk-draft-ipsec
branch: integration_v5
changes in v5:
1. Incorporated comments from Shahaf, Konstantin and Thomas
2. Rebased over the latest crypto-next tree (which is rebased over master) +
Aviad's v3 of ipsec-secgw fixes.
changes in v4:
1. Incorporated comments from Konstantin.
2. rebased over master
3. rebased over ipsec patches sent by Aviad
http://dpdk.org/ml/archives/dev/2017-October/079192.html
4. resolved multi process limitation
5. minor updates in documentation and drivers
changes in v3:
1. fixed compilation for FreeBSD
2. Incorporated comments from Pablo, John, Shahaf
3. Updated drivers for dpaa2_sec and ixgbe for some minor fixes
4. patch titles updated
5. fixed return type of rte_cryptodev_get_sec_id
changes in v2:
1. update documentation for rte_flow.
2. fixed API to unregister device to security library.
3. incorporated most of the comments from Jerin.
4. updated rte_security documentation as per the review comments from John.
5. Certain application updates for some cases.
6. updated changes in mbuf as per the comments from Olivier.
Future enhancements:
1. for full protocol offload - error handling and notification cases
2. add more security protocols
3. test application support
4. anti-replay support
5. SA time out support
6. Support Multi process use case
Reference:
[1] http://dpdk.org/ml/archives/dev/2017-July/070793.html
[2] http://dpdk.org/ml/archives/dev/2017-July/071893.html
[3] http://dpdk.org/ml/archives/dev/2017-August/072900.html
Akhil Goyal (6):
lib/rte_security: add security library
doc: add details of rte security
cryptodev: support security APIs
mk: add rte security into build system
crypto/dpaa2_sec: add support for protocol offload ipsec
examples/ipsec-secgw: add support for security offload
Boris Pismenny (3):
net: add ESP header to generic flow steering
mbuf: add security crypto flags and mbuf fields
ethdev: add rte flow action for crypto
Declan Doherty (1):
ethdev: support security APIs
Radu Nicolau (1):
net/ixgbe: enable inline ipsec
MAINTAINERS | 5 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 2 +
doc/api/doxy-api.conf | 1 +
doc/guides/cryptodevs/features/default.ini | 1 +
doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rte_flow.rst | 84 ++-
doc/guides/prog_guide/rte_security.rst | 564 +++++++++++++++++++
doc/guides/rel_notes/release_17_11.rst | 1 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 52 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 422 +++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 62 +++
drivers/net/ixgbe/Makefile | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
drivers/net/ixgbe/ixgbe_ethdev.c | 11 +
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
drivers/net/ixgbe/ixgbe_flow.c | 47 ++
drivers/net/ixgbe/ixgbe_ipsec.c | 737 +++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_ipsec.h | 151 +++++
drivers/net/ixgbe/ixgbe_rxtx.c | 59 +-
drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 57 ++
examples/ipsec-secgw/esp.c | 120 ++--
examples/ipsec-secgw/esp.h | 10 -
examples/ipsec-secgw/ipsec-secgw.c | 5 +
examples/ipsec-secgw/ipsec.c | 308 +++++++++--
examples/ipsec-secgw/ipsec.h | 32 +-
examples/ipsec-secgw/sa.c | 151 +++--
lib/Makefile | 5 +
lib/librte_cryptodev/rte_crypto.h | 3 +-
lib/librte_cryptodev/rte_crypto_sym.h | 2 +
lib/librte_cryptodev/rte_cryptodev.c | 10 +
lib/librte_cryptodev/rte_cryptodev.h | 8 +
lib/librte_cryptodev/rte_cryptodev_version.map | 1 +
lib/librte_ether/rte_ethdev.c | 7 +
lib/librte_ether/rte_ethdev.h | 8 +
lib/librte_ether/rte_ethdev_version.map | 1 +
lib/librte_ether/rte_flow.h | 65 +++
lib/librte_mbuf/rte_mbuf.c | 6 +
lib/librte_mbuf/rte_mbuf.h | 35 +-
lib/librte_mbuf/rte_mbuf_ptype.c | 1 +
lib/librte_mbuf/rte_mbuf_ptype.h | 11 +
lib/librte_net/Makefile | 2 +-
lib/librte_net/rte_esp.h | 60 ++
lib/librte_security/Makefile | 53 ++
lib/librte_security/rte_security.c | 149 +++++
lib/librte_security/rte_security.h | 528 ++++++++++++++++++
lib/librte_security/rte_security_driver.h | 155 ++++++
lib/librte_security/rte_security_version.map | 13 +
mk/rte.app.mk | 1 +
51 files changed, 3882 insertions(+), 158 deletions(-)
create mode 100644 doc/guides/prog_guide/rte_security.rst
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
create mode 100644 lib/librte_net/rte_esp.h
create mode 100644 lib/librte_security/Makefile
create mode 100644 lib/librte_security/rte_security.c
create mode 100644 lib/librte_security/rte_security.h
create mode 100644 lib/librte_security/rte_security_driver.h
create mode 100644 lib/librte_security/rte_security_version.map
--
2.9.3
* [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 15:15 ` De Lara Guarch, Pablo
` (2 more replies)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 02/11] doc: add details of rte security Akhil Goyal
` (10 subsequent siblings)
11 siblings, 3 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
The rte_security library provides APIs to create and free security
sessions for protocol offload, or for crypto operations offloaded
to an ethernet device.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
MAINTAINERS | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf | 1 +
doc/guides/rel_notes/release_17_11.rst | 1 +
lib/librte_security/Makefile | 53 +++
lib/librte_security/rte_security.c | 149 ++++++++
lib/librte_security/rte_security.h | 528 +++++++++++++++++++++++++++
lib/librte_security/rte_security_driver.h | 155 ++++++++
lib/librte_security/rte_security_version.map | 13 +
9 files changed, 906 insertions(+)
create mode 100644 lib/librte_security/Makefile
create mode 100644 lib/librte_security/rte_security.c
create mode 100644 lib/librte_security/rte_security.h
create mode 100644 lib/librte_security/rte_security_driver.h
create mode 100644 lib/librte_security/rte_security_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 826b882..50dd26e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -280,6 +280,11 @@ T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/librte_eventdev/*eth_rx_adapter*
F: test/test/test_event_eth_rx_adapter.c
+Security API - EXPERIMENTAL
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_security/
+F: doc/guides/prog_guide/rte_security.rst
Networking Drivers
------------------
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 97ce416..0f8d6d9 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -43,6 +43,7 @@ The public API headers are grouped by topics:
[rte_tm] (@ref rte_tm.h),
[rte_mtr] (@ref rte_mtr.h),
[cryptodev] (@ref rte_cryptodev.h),
+ [security] (@ref rte_security.h),
[eventdev] (@ref rte_eventdev.h),
[event_eth_rx_adapter] (@ref rte_event_eth_rx_adapter.h),
[metrics] (@ref rte_metrics.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 9e9fa56..567691b 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -70,6 +70,7 @@ INPUT = doc/api/doxy-api-index.md \
lib/librte_reorder \
lib/librte_ring \
lib/librte_sched \
+ lib/librte_security \
lib/librte_table \
lib/librte_timer \
lib/librte_vhost
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index e4e98f7..6f1d537 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -382,6 +382,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
+ + librte_security.so.1
librte_table.so.2
librte_timer.so.1
librte_vhost.so.3
diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile
new file mode 100644
index 0000000..af87bb2
--- /dev/null
+++ b/lib/librte_security/Makefile
@@ -0,0 +1,53 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_security.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_security.c
+
+# export include files
+SYMLINK-y-include += rte_security.h
+SYMLINK-y-include += rte_security_driver.h
+
+# versioning export map
+EXPORT_MAP := rte_security_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
new file mode 100644
index 0000000..1227fca
--- /dev/null
+++ b/lib/librte_security/rte_security.c
@@ -0,0 +1,149 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of NXP nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_malloc.h>
+#include <rte_dev.h>
+
+#include "rte_security.h"
+#include "rte_security_driver.h"
+
+struct rte_security_session *
+rte_security_session_create(struct rte_security_ctx *instance,
+ struct rte_security_session_conf *conf,
+ struct rte_mempool *mp)
+{
+ struct rte_security_session *sess = NULL;
+
+ if (conf == NULL)
+ return NULL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_create, NULL);
+
+ if (rte_mempool_get(mp, (void *)&sess))
+ return NULL;
+
+ if (instance->ops->session_create(instance->device, conf, sess, mp)) {
+ rte_mempool_put(mp, (void *)sess);
+ return NULL;
+ }
+ instance->sess_cnt++;
+
+ return sess;
+}
+
+int
+rte_security_session_update(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_update, -ENOTSUP);
+ return instance->ops->session_update(instance->device, sess, conf);
+}
+
+int
+rte_security_session_stats_get(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_stats_get, -ENOTSUP);
+ return instance->ops->session_stats_get(instance->device, sess, stats);
+}
+
+int
+rte_security_session_destroy(struct rte_security_ctx *instance,
+ struct rte_security_session *sess)
+{
+ int ret;
+ struct rte_mempool *mp = rte_mempool_from_obj(sess);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_destroy, -ENOTSUP);
+
+ if (instance->sess_cnt)
+ instance->sess_cnt--;
+
+ ret = instance->ops->session_destroy(instance->device, sess);
+ if (!ret)
+ rte_mempool_put(mp, (void *)sess);
+
+ return ret;
+}
+
+int
+rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_mbuf *m, void *params)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->set_pkt_metadata, -ENOTSUP);
+ return instance->ops->set_pkt_metadata(instance->device,
+ sess, m, params);
+}
+
+const struct rte_security_capability *
+rte_security_capabilities_get(struct rte_security_ctx *instance)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
+ return instance->ops->capabilities_get(instance->device);
+}
+
+const struct rte_security_capability *
+rte_security_capability_get(struct rte_security_ctx *instance,
+ struct rte_security_capability_idx *idx)
+{
+ const struct rte_security_capability *capabilities;
+ const struct rte_security_capability *capability;
+ uint16_t i = 0;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
+ capabilities = instance->ops->capabilities_get(instance->device);
+
+ if (capabilities == NULL)
+ return NULL;
+
+ while ((capability = &capabilities[i++])->action
+ != RTE_SECURITY_ACTION_TYPE_NONE) {
+ if (capability->action == idx->action &&
+ capability->protocol == idx->protocol) {
+ if (idx->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ if (capability->ipsec.proto ==
+ idx->ipsec.proto &&
+ capability->ipsec.mode ==
+ idx->ipsec.mode &&
+ capability->ipsec.direction ==
+ idx->ipsec.direction)
+ return capability;
+ }
+ }
+ }
+
+ return NULL;
+}
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
new file mode 100644
index 0000000..87b39fb
--- /dev/null
+++ b/lib/librte_security/rte_security.h
@@ -0,0 +1,528 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of NXP nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SECURITY_H_
+#define _RTE_SECURITY_H_
+
+/**
+ * @file rte_security.h
+ *
+ * RTE Security Common Definitions
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <sys/types.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+#include <netinet/ip6.h>
+
+#include <rte_common.h>
+#include <rte_crypto.h>
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** IPSec protocol mode */
+enum rte_security_ipsec_sa_mode {
+ RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ /**< IPSec Transport mode */
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ /**< IPSec Tunnel mode */
+};
+
+/** IPSec Protocol */
+enum rte_security_ipsec_sa_protocol {
+ RTE_SECURITY_IPSEC_SA_PROTO_AH,
+ /**< AH protocol */
+ RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ /**< ESP protocol */
+};
+
+/** IPSEC tunnel type */
+enum rte_security_ipsec_tunnel_type {
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ /**< Outer header is IPv4 */
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ /**< Outer header is IPv6 */
+};
+
+/**
+ * Security context for crypto/eth devices
+ *
+ * Security instance for each driver to register security operations.
+ * The application can get the security context from the crypto/eth device id
+ * using the APIs rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx()
+ * This structure is used to identify the device(crypto/eth) for which the
+ * security operations need to be performed.
+ */
+struct rte_security_ctx {
+ void *device;
+ /**< Crypto/ethernet device attached */
+ struct rte_security_ops *ops;
+ /**< Pointer to security ops for the device */
+ uint16_t sess_cnt;
+ /**< Number of sessions attached to this context */
+};
+
+/**
+ * IPSEC tunnel parameters
+ *
+ * These parameters are used to build outbound tunnel headers.
+ */
+struct rte_security_ipsec_tunnel_param {
+ enum rte_security_ipsec_tunnel_type type;
+ /**< Tunnel type: IPv4 or IPv6 */
+ RTE_STD_C11
+ union {
+ struct {
+ struct in_addr src_ip;
+ /**< IPv4 source address */
+ struct in_addr dst_ip;
+ /**< IPv4 destination address */
+ uint8_t dscp;
+ /**< IPv4 Differentiated Services Code Point */
+ uint8_t df;
+ /**< IPv4 Don't Fragment bit */
+ uint8_t ttl;
+ /**< IPv4 Time To Live */
+ } ipv4;
+ /**< IPv4 header parameters */
+ struct {
+ struct in6_addr src_addr;
+ /**< IPv6 source address */
+ struct in6_addr dst_addr;
+ /**< IPv6 destination address */
+ uint8_t dscp;
+ /**< IPv6 Differentiated Services Code Point */
+ uint32_t flabel;
+ /**< IPv6 flow label */
+ uint8_t hlimit;
+ /**< IPv6 hop limit */
+ } ipv6;
+ /**< IPv6 header parameters */
+ };
+};
+
+/**
+ * IPsec Security Association option flags
+ */
+struct rte_security_ipsec_sa_options {
+ /**< Extended Sequence Numbers (ESN)
+ *
+ * * 1: Use extended (64 bit) sequence numbers
+ * * 0: Use normal sequence numbers
+ */
+ uint32_t esn : 1;
+
+ /**< UDP encapsulation
+ *
+ * * 1: Do UDP encapsulation/decapsulation so that IPSEC packets can
+ * traverse through NAT boxes.
+ * * 0: No UDP encapsulation
+ */
+ uint32_t udp_encap : 1;
+
+ /**< Copy DSCP bits
+ *
+ * * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to
+ * the outer IP header in encapsulation, and vice versa in
+ * decapsulation.
+ * * 0: Do not change DSCP field.
+ */
+ uint32_t copy_dscp : 1;
+
+ /**< Copy IPv6 Flow Label
+ *
+ * * 1: Copy IPv6 flow label from inner IPv6 header to the
+ * outer IPv6 header.
+ * * 0: Outer header is not modified.
+ */
+ uint32_t copy_flabel : 1;
+
+ /**< Copy IPv4 Don't Fragment bit
+ *
+ * * 1: Copy the DF bit from the inner IPv4 header to the outer
+ * IPv4 header.
+ * * 0: Outer header is not modified.
+ */
+ uint32_t copy_df : 1;
+
+ /**< Decrement inner packet Time To Live (TTL) field
+ *
+ * * 1: In tunnel mode, decrement inner packet IPv4 TTL or
+ * IPv6 Hop Limit after tunnel decapsulation, or before tunnel
+ * encapsulation.
+ * * 0: Inner packet is not modified.
+ */
+ uint32_t dec_ttl : 1;
+};
+
+/** IPSec security association direction */
+enum rte_security_ipsec_sa_direction {
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ /**< Encrypt and generate digest */
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ /**< Verify digest and decrypt */
+};
+
+/**
+ * IPsec security association configuration data.
+ *
+ * This structure contains data required to create an IPsec SA security session.
+ */
+struct rte_security_ipsec_xform {
+ uint32_t spi;
+ /**< SA security parameter index */
+ uint32_t salt;
+ /**< SA salt */
+ struct rte_security_ipsec_sa_options options;
+ /**< various SA options */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPSec SA Direction - Egress/Ingress */
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA Protocol - AH/ESP */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA Mode - transport/tunnel */
+ struct rte_security_ipsec_tunnel_param tunnel;
+ /**< Tunnel parameters, NULL for transport mode */
+};
+
+/**
+ * MACsec security session configuration
+ */
+struct rte_security_macsec_xform {
+ /** To be Filled */
+};
+
+/**
+ * Security session action type.
+ */
+enum rte_security_session_action_type {
+ RTE_SECURITY_ACTION_TYPE_NONE,
+ /**< No security actions */
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ /**< Crypto processing for security protocol is processed inline
+ * during transmission
+ */
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ /**< All security protocol processing is performed inline during
+ * transmission
+ */
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ /**< All security protocol processing including crypto is performed
+ * on a lookaside accelerator
+ */
+};
+
+/** Security session protocol definition */
+enum rte_security_session_protocol {
+ RTE_SECURITY_PROTOCOL_IPSEC,
+ /**< IPsec Protocol */
+ RTE_SECURITY_PROTOCOL_MACSEC,
+ /**< MACSec Protocol */
+};
+
+/**
+ * Security session configuration
+ */
+struct rte_security_session_conf {
+ enum rte_security_session_action_type action_type;
+ /**< Type of action to be performed on the session */
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+ union {
+ struct rte_security_ipsec_xform ipsec;
+ struct rte_security_macsec_xform macsec;
+ };
+ /**< Configuration parameters for security session */
+ struct rte_crypto_sym_xform *crypto_xform;
+ /**< Security Session Crypto Transformations */
+};
+
+struct rte_security_session {
+ void *sess_private_data;
+ /**< Private session material */
+};
+
+/**
+ * Create security session as specified by the session configuration
+ *
+ * @param instance security instance
+ * @param conf session configuration parameters
+ * @param mp mempool to allocate session objects from
+ * @return
+ * - On success, pointer to session
+ * - On failure, NULL
+ */
+struct rte_security_session *
+rte_security_session_create(struct rte_security_ctx *instance,
+ struct rte_security_session_conf *conf,
+ struct rte_mempool *mp);
+
+/**
+ * Update security session as specified by the session configuration
+ *
+ * @param instance security instance
+ * @param sess session to update parameters
+ * @param conf update configuration parameters
+ * @return
+ * - On success returns 0
+ * - On failure return errno
+ */
+int
+rte_security_session_update(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf);
+
+/**
+ * Free security session header and the session private data and
+ * return it to its original mempool.
+ *
+ * @param instance security instance
+ * @param sess security session to be freed
+ *
+ * @return
+ * - 0 if successful.
+ * - -EINVAL if session is NULL.
+ * - -EBUSY if not all device private data has been freed.
+ */
+int
+rte_security_session_destroy(struct rte_security_ctx *instance,
+ struct rte_security_session *sess);
+
+/**
+ * Updates the buffer with device-specific defined metadata
+ *
+ * @param instance security instance
+ * @param sess security session
+ * @param mb packet mbuf to set metadata on.
+ * @param params device-specific defined parameters
+ * required for metadata
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_mbuf *mb, void *params);
+
+/**
+ * Attach a session to a symmetric crypto operation
+ *
+ * @param sym_op crypto operation
+ * @param sess security session
+ */
+static inline int
+__rte_security_attach_session(struct rte_crypto_sym_op *sym_op,
+ struct rte_security_session *sess)
+{
+ sym_op->sec_session = sess;
+
+ return 0;
+}
+
+static inline void *
+get_sec_session_private_data(const struct rte_security_session *sess)
+{
+ return sess->sess_private_data;
+}
+
+static inline void
+set_sec_session_private_data(struct rte_security_session *sess,
+ void *private_data)
+{
+ sess->sess_private_data = private_data;
+}
+
+/**
+ * Attach a session to a crypto operation.
+ * This API is needed only in case of RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * For other rte_security_session_action_type, ol_flags in rte_mbuf may be
+ * defined to perform security operations.
+ *
+ * @param op crypto operation
+ * @param sess security session
+ */
+static inline int
+rte_security_attach_session(struct rte_crypto_op *op,
+ struct rte_security_session *sess)
+{
+ if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
+ return -EINVAL;
+
+ op->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+
+ return __rte_security_attach_session(op->sym, sess);
+}
+
+struct rte_security_macsec_stats {
+ uint64_t reserved;
+};
+
+struct rte_security_ipsec_stats {
+ uint64_t reserved;
+
+};
+
+struct rte_security_stats {
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+
+ union {
+ struct rte_security_macsec_stats macsec;
+ struct rte_security_ipsec_stats ipsec;
+ };
+};
+
+/**
+ * Get security session statistics
+ *
+ * @param instance security instance
+ * @param sess security session
+ * @param stats statistics
+ * @return
+ * - On success return 0
+ * - On failure errno
+ */
+int
+rte_security_session_stats_get(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats);
+
+/**
+ * Security capability definition
+ */
+struct rte_security_capability {
+ enum rte_security_session_action_type action;
+ /**< Security action type*/
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol */
+ RTE_STD_C11
+ union {
+ struct {
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA protocol */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA mode */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPsec SA direction */
+ struct rte_security_ipsec_sa_options options;
+ /**< IPsec SA supported options */
+ } ipsec;
+ /**< IPsec capability */
+ struct {
+ /* To be Filled */
+ } macsec;
+ /**< MACsec capability */
+ };
+
+ const struct rte_cryptodev_capabilities *crypto_capabilities;
+ /**< Corresponding crypto capabilities for security capability */
+
+ uint32_t ol_flags;
+ /**< Device offload flags */
+};
+
+#define RTE_SECURITY_TX_OLOAD_NEED_MDATA 0x00000001
+/**< HW needs metadata update, see rte_security_set_pkt_metadata().
+ */
+
+#define RTE_SECURITY_TX_HW_TRAILER_OFFLOAD 0x00000002
+/**< HW constructs trailer of packets
+ * Transmitted packets will have the trailer added to them
+ * by hardware. The next protocol field will be based on
+ * the mbuf->inner_esp_next_proto field.
+ */
+#define RTE_SECURITY_RX_HW_TRAILER_OFFLOAD 0x00010000
+/**< HW removes trailer of packets
+ * Received packets have no trailer, the next protocol field
+ * is supplied in the mbuf->inner_esp_next_proto field.
+ * Inner packet is not modified.
+ */
+
+/**
+ * Security capability index used to query a security instance for a specific
+ * security capability
+ */
+struct rte_security_capability_idx {
+ enum rte_security_session_action_type action;
+ enum rte_security_session_protocol protocol;
+
+ union {
+ struct {
+ enum rte_security_ipsec_sa_protocol proto;
+ enum rte_security_ipsec_sa_mode mode;
+ enum rte_security_ipsec_sa_direction direction;
+ } ipsec;
+ };
+};
+
+/**
+ * Returns array of security instance capabilities
+ *
+ * @param instance Security instance.
+ *
+ * @return
+ * - Returns array of security capabilities.
+ * - Return NULL if no capabilities available.
+ */
+const struct rte_security_capability *
+rte_security_capabilities_get(struct rte_security_ctx *instance);
+
+/**
+ * Query if a specific capability is available on security instance
+ *
+ * @param instance security instance.
+ * @param idx security capability index to match against
+ *
+ * @return
+ * - Returns pointer to security capability on match of capability
+ * index criteria.
+ * - Return NULL if the capability not matched on security instance.
+ */
+const struct rte_security_capability *
+rte_security_capability_get(struct rte_security_ctx *instance,
+ struct rte_security_capability_idx *idx);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SECURITY_H_ */
diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h
new file mode 100644
index 0000000..78814fa
--- /dev/null
+++ b/lib/librte_security/rte_security_driver.h
@@ -0,0 +1,155 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SECURITY_DRIVER_H_
+#define _RTE_SECURITY_DRIVER_H_
+
+/**
+ * @file rte_security_driver.h
+ *
+ * RTE Security Common Definitions
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "rte_security.h"
+
+/**
+ * Configure a security session on a device.
+ *
+ * @param device Crypto/eth device pointer
+ * @param conf Security session configuration
+ * @param sess Pointer to Security private session structure
+ * @param mp Mempool where the private session is allocated
+ *
+ * @return
+ * - Returns 0 if the private session structure has been created successfully.
+ * - Returns -EINVAL if input parameters are invalid.
+ * - Returns -ENOTSUP if crypto device does not support the crypto transform.
+ * - Returns -ENOMEM if the private session could not be allocated.
+ */
+typedef int (*security_session_create_t)(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *sess,
+ struct rte_mempool *mp);
+
+/**
+ * Free driver private session data.
+ *
+ * @param dev Crypto/eth device pointer
+ * @param sess Security session structure
+ */
+typedef int (*security_session_destroy_t)(void *device,
+ struct rte_security_session *sess);
+
+/**
+ * Update driver private session data.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Pointer to Security private session structure
+ * @param conf Security session configuration
+ *
+ * @return
+ * - Returns 0 if the private session structure has been updated successfully.
+ * - Returns -EINVAL if input parameters are invalid.
+ * - Returns -ENOTSUP if crypto device does not support the crypto transform.
+ */
+typedef int (*security_session_update_t)(void *device,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf);
+/**
+ * Get stats from the PMD.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Pointer to Security private session structure
+ * @param stats Security stats of the driver
+ *
+ * @return
+ * - Returns 0 if the private session structure has been updated successfully.
+ * - Returns -EINVAL if session parameters are invalid.
+ */
+typedef int (*security_session_stats_get_t)(void *device,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats);
+
+/**
+ * Update the mbuf with provided metadata.
+ *
+ * @param sess Security session structure
+ * @param mb Packet buffer
+ * @param mt Metadata
+ *
+ * @return
+ * - Returns 0 if metadata updated successfully.
+ * - Returns a negative value on error.
+ */
+typedef int (*security_set_pkt_metadata_t)(void *device,
+ struct rte_security_session *sess, struct rte_mbuf *m,
+ void *params);
+
+/**
+ * Get security capabilities of the device.
+ *
+ * @param device crypto/eth device pointer
+ *
+ * @return
+ * - Returns rte_security_capability pointer on success.
+ * - Returns NULL on error.
+ */
+typedef const struct rte_security_capability *(*security_capabilities_get_t)(
+ void *device);
+
+/** Security operations function pointer table */
+struct rte_security_ops {
+ security_session_create_t session_create;
+ /**< Configure a security session. */
+ security_session_update_t session_update;
+ /**< Update a security session. */
+ security_session_stats_get_t session_stats_get;
+ /**< Get security session statistics. */
+ security_session_destroy_t session_destroy;
+ /**< Clear a security sessions private data. */
+ security_set_pkt_metadata_t set_pkt_metadata;
+ /**< Update mbuf metadata. */
+ security_capabilities_get_t capabilities_get;
+ /**< Get security capabilities. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SECURITY_DRIVER_H_ */
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
new file mode 100644
index 0000000..8af7fc1
--- /dev/null
+++ b/lib/librte_security/rte_security_version.map
@@ -0,0 +1,13 @@
+DPDK_17.11 {
+ global:
+
+ rte_security_attach_session;
+ rte_security_capabilities_get;
+ rte_security_capability_get;
+ rte_security_session_create;
+ rte_security_session_destroy;
+ rte_security_session_stats_get;
+ rte_security_session_update;
+ rte_security_set_pkt_metadata;
+
+};
--
2.9.3
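For reference, an illustrative sketch (not part of this patch) of the
driver side: a PMD implements the callbacks declared in
rte_security_driver.h and exposes them through a rte_security_ops
table. How the resulting context is attached to the crypto/eth device
is handled by the cryptodev/ethdev patches later in this series; all
names below are hypothetical and only two callbacks are shown.

#include <errno.h>
#include <rte_security_driver.h>

static int
dummy_session_create(void *device, struct rte_security_session_conf *conf,
		     struct rte_security_session *sess, struct rte_mempool *mp)
{
	void *priv;

	(void)device; (void)conf;
	if (rte_mempool_get(mp, &priv))
		return -ENOMEM;
	/* ... parse conf and program the hardware here ... */
	set_sec_session_private_data(sess, priv);
	return 0;
}

/* Capability list is terminated by RTE_SECURITY_ACTION_TYPE_NONE,
 * as expected by rte_security_capability_get().
 */
static const struct rte_security_capability dummy_capabilities[] = {
	{ .action = RTE_SECURITY_ACTION_TYPE_NONE }
};

static const struct rte_security_capability *
dummy_capabilities_get(void *device)
{
	(void)device;
	return dummy_capabilities;
}

struct rte_security_ops dummy_sec_ops = {
	.session_create = dummy_session_create,
	.capabilities_get = dummy_capabilities_get,
	/* .session_update, .session_stats_get, .session_destroy and
	 * .set_pkt_metadata would be filled in the same way.
	 */
};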
* Re: [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library Akhil Goyal
@ 2017-10-24 15:15 ` De Lara Guarch, Pablo
2017-10-25 11:06 ` Akhil Goyal
2017-10-24 20:47 ` Thomas Monjalon
2017-10-25 5:13 ` Hemant Agrawal
2 siblings, 1 reply; 195+ messages in thread
From: De Lara Guarch, Pablo @ 2017-10-24 15:15 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Doherty, Declan, hemant.agrawal, Nicolau, Radu, borisp, aviadye,
thomas, sandeep.malik, jerin.jacob, Mcnamara, John, Ananyev,
Konstantin, shahafs, olivier.matz
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Tuesday, October 24, 2017 3:16 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Nicolau,
> Radu <radu.nicolau@intel.com>; borisp@mellanox.com;
> aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com;
> jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; shahafs@mellanox.com;
> olivier.matz@6wind.com
> Subject: [PATCH v5 01/11] lib/rte_security: add security library
>
If you are making a v6, I would change the title to "security: ...".
Also, there is an issue described below.
Regards,
Pablo
...
> diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile new
> file mode 100644 index 0000000..af87bb2
> --- /dev/null
> +++ b/lib/librte_security/Makefile
...
> +
> +# library name
> +LIB = librte_security.a
> +
> +# library version
> +LIBABIVER := 1
> +
> +# build flags
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
There is a compilation issue when building as a shared library, because LDLIBS has not been set.
You need to add the following:
+LDLIBS += -lrte_eal -lrte_mempool
* Re: [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library
2017-10-24 15:15 ` De Lara Guarch, Pablo
@ 2017-10-25 11:06 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 11:06 UTC (permalink / raw)
To: De Lara Guarch, Pablo, dev
Cc: Doherty, Declan, hemant.agrawal, Nicolau, Radu, borisp, aviadye,
thomas, sandeep.malik, jerin.jacob, Mcnamara, John, Ananyev,
Konstantin, shahafs, olivier.matz
Hi Pablo,
On 10/24/2017 8:45 PM, De Lara Guarch, Pablo wrote:
> Hi Akhil,
>
>> -----Original Message-----
>> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
>> Sent: Tuesday, October 24, 2017 3:16 PM
>> To: dev@dpdk.org
>> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
>> <pablo.de.lara.guarch@intel.com>; hemant.agrawal@nxp.com; Nicolau,
>> Radu <radu.nicolau@intel.com>; borisp@mellanox.com;
>> aviadye@mellanox.com; thomas@monjalon.net; sandeep.malik@nxp.com;
>> jerin.jacob@caviumnetworks.com; Mcnamara, John
>> <john.mcnamara@intel.com>; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; shahafs@mellanox.com;
>> olivier.matz@6wind.com
>> Subject: [PATCH v5 01/11] lib/rte_security: add security library
>>
>
> If you are making a v6, I would change the title to "security: ...".
> Also, there is an issue described below.
>
> Regards,
> Pablo
OK, will change the title to "security: introduce security API and framework".
>
> ...
>
>> diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile new
>> file mode 100644 index 0000000..af87bb2
>> --- /dev/null
>> +++ b/lib/librte_security/Makefile
>
> ...
>
>> +
>> +# library name
>> +LIB = librte_security.a
>> +
>> +# library version
>> +LIBABIVER := 1
>> +
>> +# build flags
>> +CFLAGS += -O3
>> +CFLAGS += $(WERROR_FLAGS)
>
> There is a compilation issue when the building as shared library, because LDLIBS have not been set.
>
> You need to add the following:
>
> +LDLIBS += -lrte_eal -lrte_mempool
>
>
>
Thanks for pointing this out. I think it got broken due to some
recently merged patches. Will correct this in v6.
Thanks,
Akhil
* Re: [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library Akhil Goyal
2017-10-24 15:15 ` De Lara Guarch, Pablo
@ 2017-10-24 20:47 ` Thomas Monjalon
2017-10-25 11:08 ` Akhil Goyal
2017-10-25 5:13 ` Hemant Agrawal
2 siblings, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-24 20:47 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Hi,
I am waiting for the crypto subtree to be ready before getting this series.
Some last comments below,
24/10/2017 16:15, Akhil Goyal:
> +Security API - EXPERIMENTAL
> +M: Akhil Goyal <akhil.goyal@nxp.com>
> +M: Declan Doherty <declan.doherty@intel.com>
> +F: lib/librte_security/
> +F: doc/guides/prog_guide/rte_security.rst
>
> Networking Drivers
> ------------------
An additional blank line is missing.
> +# build flags
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
As said by Pablo, please fix the build with LDLIBS.
> +/**
> + * @file rte_security.h
> + *
> + * RTE Security Common Definitions
> + *
> + */
You should add this line:
@b EXPERIMENTAL: this API may change without prior notice
> --- /dev/null
> +++ b/lib/librte_security/rte_security_version.map
> @@ -0,0 +1,13 @@
> +DPDK_17.11 {
The name of this block should be EXPERIMENTAL
> + global:
> +
> + rte_security_attach_session;
> + rte_security_capabilities_get;
> + rte_security_capability_get;
> + rte_security_session_create;
> + rte_security_session_destroy;
> + rte_security_session_stats_get;
> + rte_security_session_update;
> + rte_security_set_pkt_metadata;
> +
> +};
I think you need this line:
local: *;
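
Putting the two map-file comments together, the block would
presumably end up looking like this (sketch only; the symbol list is
unchanged from the patch):

EXPERIMENTAL {
	global:

	rte_security_attach_session;
	rte_security_capabilities_get;
	rte_security_capability_get;
	rte_security_session_create;
	rte_security_session_destroy;
	rte_security_session_stats_get;
	rte_security_session_update;
	rte_security_set_pkt_metadata;

	local: *;
};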
* Re: [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library
2017-10-24 20:47 ` Thomas Monjalon
@ 2017-10-25 11:08 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 11:08 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Hi Thomas,
On 10/25/2017 2:17 AM, Thomas Monjalon wrote:
> Hi,
>
> I am waiting the crypto subtree to be ready before getting this series.
>
> Some last comments below,
>
> 24/10/2017 16:15, Akhil Goyal:
>> +Security API - EXPERIMENTAL
>> +M: Akhil Goyal <akhil.goyal@nxp.com>
>> +M: Declan Doherty <declan.doherty@intel.com>
>> +F: lib/librte_security/
>> +F: doc/guides/prog_guide/rte_security.rst
>>
>> Networking Drivers
>> ------------------
>
> An additional blank line is missing.
>
>
>> +# build flags
>> +CFLAGS += -O3
>> +CFLAGS += $(WERROR_FLAGS)
>
> As said by Pablo, please fix the build with LDLIBS.
>
>
>> +/**
>> + * @file rte_security.h
>> + *
>> + * RTE Security Common Definitions
>> + *
>> + */
>
> You should add this line:
>
> @b EXPERIMENTAL: this API may change without prior notice
>
>
>> --- /dev/null
>> +++ b/lib/librte_security/rte_security_version.map
>> @@ -0,0 +1,13 @@
>> +DPDK_17.11 {
>
> The name of this block should be EXPERIMENTAL
>
>> + global:
>> +
>> + rte_security_attach_session;
>> + rte_security_capabilities_get;
>> + rte_security_capability_get;
>> + rte_security_session_create;
>> + rte_security_session_destroy;
>> + rte_security_session_stats_get;
>> + rte_security_session_update;
>> + rte_security_set_pkt_metadata;
>> +
>> +};
>
> I think you need this line:
> local: *;
>
>
Will correct all this in v6.
Thanks,
Akhil
* Re: [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library Akhil Goyal
2017-10-24 15:15 ` De Lara Guarch, Pablo
2017-10-24 20:47 ` Thomas Monjalon
@ 2017-10-25 5:13 ` Hemant Agrawal
2 siblings, 0 replies; 195+ messages in thread
From: Hemant Agrawal @ 2017-10-25 5:13 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, radu.nicolau, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, john.mcnamara,
konstantin.ananyev, shahafs, olivier.matz
Hi Akhil,
Some minor comments.
On 10/24/2017 7:45 PM, Akhil Goyal wrote:
> rte_security library provides APIs for security session
> create/free for protocol offload or offloaded crypto
> operation to ethernet device.
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> ---
..<snip>
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> new file mode 100644
> index 0000000..87b39fb
> --- /dev/null
> +++ b/lib/librte_security/rte_security.h
> @@ -0,0 +1,528 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright 2017 NXP.
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of NXP nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_SECURITY_H_
> +#define _RTE_SECURITY_H_
> +
> +/**
> + * @file rte_security.h
> + *
> + * RTE Security Common Definitions
> + *
minor comment:
better to add:
@b EXPERIMENTAL: this API may change without prior notice
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
..<snip>
> +
> +/**
> + * Create security session as specified by the session configuration
> + *
> + * @param instance security instance
> + * @param conf session configuration parameters
> + * @param mp mempool to allocate session objects from
can you fix the spacing for *mp* details here?
> + * @return
> + * - On success, pointer to session
> + * - On failure, NULL
> + */
> +struct rte_security_session *
> +rte_security_session_create(struct rte_security_ctx *instance,
> + struct rte_security_session_conf *conf,
> + struct rte_mempool *mp);
> +
> diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h
> new file mode 100644
> index 0000000..78814fa
> --- /dev/null
> +++ b/lib/librte_security/rte_security_driver.h
> @@ -0,0 +1,155 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + * Copyright 2017 NXP.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_SECURITY_DRIVER_H_
> +#define _RTE_SECURITY_DRIVER_H_
> +
> +/**
> + * @file rte_security_driver.h
> + *
> + * RTE Security Common Definitions
RTE Security driver related common function definitions.
@b EXPERIMENTAL: these APIs may change without prior notice
> diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
> new file mode 100644
> index 0000000..8af7fc1
> --- /dev/null
> +++ b/lib/librte_security/rte_security_version.map
> @@ -0,0 +1,13 @@
> +DPDK_17.11 {
This should be EXPERIMENTAL
> + global:
> +
> + rte_security_attach_session;
> + rte_security_capabilities_get;
> + rte_security_capability_get;
> + rte_security_session_create;
> + rte_security_session_destroy;
> + rte_security_session_stats_get;
> + rte_security_session_update;
> + rte_security_set_pkt_metadata;
> +
> +};
>
* [dpdk-dev] [PATCH v5 02/11] doc: add details of rte security
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 03/11] cryptodev: support security APIs Akhil Goyal
` (9 subsequent siblings)
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rte_security.rst | 564 +++++++++++++++++++++++++++++++++
2 files changed, 565 insertions(+)
create mode 100644 doc/guides/prog_guide/rte_security.rst
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index fbd2a72..9759264 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -47,6 +47,7 @@ Programmer's Guide
traffic_metering_and_policing
traffic_management
cryptodev_lib
+ rte_security
link_bonding_poll_mode_drv_lib
timer_lib
hash_lib
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
new file mode 100644
index 0000000..71be036
--- /dev/null
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -0,0 +1,564 @@
+.. BSD LICENSE
+ Copyright 2017 NXP.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of NXP nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Security Library
+================
+
+The security library provides a framework for management and provisioning
+of security protocol operations offloaded to hardware based devices. The
+library defines generic APIs to create and free security sessions which can
+support full protocol offload as well as inline crypto operation with
+NIC or crypto devices. The framework currently supports only the IPSec protocol
+and associated operations; other protocols will be added in the future.
+
+Design Principles
+-----------------
+
+The security library provides an additional offload capability to an existing
+crypto device and/or ethernet device.
+
+.. code-block:: console
+
+ +---------------+
+ | rte_security |
+ +---------------+
+ \ /
+ +-----------+ +--------------+
+ | NIC PMD | | CRYPTO PMD |
+ +-----------+ +--------------+
+
+.. note::
+
+ Currently, the security library does not support multi-process applications.
+ This will be addressed in future releases.
+
+The supported offload types are explained in the sections below.
+
+Inline Crypto
+~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+The crypto processing for the security protocol (e.g. IPSec) is performed
+inline during receive and transmission on a NIC port. The flow based
+security action should be configured on the port.
+
+Ingress Data path - The packet is decrypted in the Rx path and the relevant
+crypto status is set in the Rx descriptors. After successful inline crypto
+processing the packet is presented to the host as a regular Rx packet;
+however, all security protocol related headers are still attached to the
+packet. E.g. in the case of IPSec, the IPSec tunnel headers (if any) and
+ESP/AH headers remain in the packet, but the received packet contains the
+decrypted data where the encrypted data was when the packet arrived. The
+driver Rx path checks the descriptors and, based on the crypto status, sets
+additional flags in the ``rte_mbuf.ol_flags`` field.
+
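+The following is a minimal sketch (not part of the library itself) of how an
+application might inspect these flags on received packets, assuming the
+``PKT_RX_SEC_OFFLOAD`` and ``PKT_RX_SEC_OFFLOAD_FAILED`` mbuf flags added
+elsewhere in this patch set:
+
+.. code-block:: c
+
+    #include <rte_mbuf.h>
+
+    /* Process a burst of packets received on a port with inline crypto
+     * offload enabled.
+     */
+    static void
+    handle_rx_burst(struct rte_mbuf **pkts, uint16_t nb_rx)
+    {
+        uint16_t i;
+
+        for (i = 0; i < nb_rx; i++) {
+            struct rte_mbuf *m = pkts[i];
+
+            if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
+                if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) {
+                    /* Inline crypto processing failed: drop the packet. */
+                    rte_pktmbuf_free(m);
+                    continue;
+                }
+                /* Payload already decrypted; the security protocol headers
+                 * are still attached to the packet.
+                 */
+            }
+            /* Continue with regular packet processing. */
+        }
+    }
+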
+.. note::
+
+ The underlying device may not support crypto processing for all ingress packets
+ matching a particular flow (e.g. fragmented packets); such packets will
+ be passed up as encrypted packets. It is the responsibility of the application
+ to process such encrypted packets using another crypto driver instance.
+
+Egress Data path - The software prepares the egress packet by adding the
+relevant security protocol headers; only the data is left unencrypted by
+the software. The driver configures the Tx descriptors accordingly.
+The hardware device will encrypt the data before sending the
+packet out.
+
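+A minimal sketch of the transmit side, assuming the ``PKT_TX_SEC_OFFLOAD`` mbuf
+flag added elsewhere in this patch set; the software has already added the
+security protocol headers and only requests inline encryption of the payload:
+
+.. code-block:: c
+
+    #include <rte_mbuf.h>
+
+    /* Request inline crypto processing for an egress packet whose security
+     * protocol headers (e.g. ESP) were already added by the software.
+     */
+    static void
+    request_inline_crypto_tx(struct rte_mbuf *m)
+    {
+        m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+        /* Device specific metadata may also be required; see
+         * rte_security_set_pkt_metadata() described later in this guide.
+         */
+    }
+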
+.. note::
+
+ The underlying device may support post encryption TSO.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | |
+ | +------|------+ |
+ | +------V------+ |
+ | | Tunnel | | <------ Add tunnel header to packet
+ | +------|------+ |
+ | +------V------+ |
+ | | ESP | | <------ Add ESP header without trailer to packet
+ | | | | <------ Mark packet to be offloaded, add trailer
+ | +------|------+ | meta-data to mbuf
+ +--------V--------+
+ |
+ +--------V--------+
+ | L2 Stack |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Set hw context for inline crypto offload
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED | <------ Packet Encryption and
+ | NIC | Authentication happens inline
+ | |
+ +-----------------+
+
+
+Inline protocol offload
+~~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+The crypto and protocol processing for the security protocol (e.g. IPSec)
+is performed inline during receive and transmission. The flow based
+security action should be configured on the port.
+
+Ingress Data path - The packet is decrypted in the Rx path and the relevant
+crypto status is set in the Rx descriptors. After successful inline crypto
+processing the packet is presented to the host as a regular Rx packet, but
+all security protocol related headers are optionally removed from the
+packet. E.g. in the case of IPSec, the IPSec tunnel headers (if any) and
+ESP/AH headers will be removed from the packet, and the received packet will
+contain the decrypted payload only. The driver Rx path checks the
+descriptors and, based on the crypto status, sets additional flags in the
+``rte_mbuf.ol_flags`` field.
+
+.. note::
+
+ The underlying device in this case is stateful. The device is expected to
+ support crypto processing for all kinds of packets matching a given flow;
+ this includes fragmented packets (post reassembly).
+ E.g. in the case of IPSec the device may internally manage anti-replay etc.
+ It will provide a configuration option for anti-replay behavior, i.e. to drop
+ the packets or pass them to the driver with error flags set in the descriptor.
+
+Egress Data path - The software sends the plain packet without any security
+protocol headers added to it. The driver configures the security index and
+other requirements in the Tx descriptors. The hardware device performs the
+security processing on the packet, which includes adding the relevant
+protocol headers and encrypting the data, before sending the packet out. The
+software should make sure that the buffer has the required headroom and
+tailroom for any protocol header addition. The software may also do early
+fragmentation if the resultant packet is expected to exceed the MTU size.
+
+
+.. note::
+
+ The underlying device will manage the state information required for egress
+ processing. E.g. in the case of IPSec, the sequence number will be added to
+ the packet; however, the device shall provide an indication when the sequence
+ number is about to overflow. The underlying device may support post
+ encryption TSO.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | |
+ | +------|------+ |
+ | +------V------+ |
+ | | Desc | | <------ Mark packet to be offloaded
+ | +------|------+ |
+ +--------V--------+
+ |
+ +--------V--------+
+ | L2 Stack |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Set hw context for inline crypto offload
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED | <------ Add tunnel, ESP header etc header to
+ | NIC | packet. Packet Encryption and
+ | | Authentication happens inline.
+ +-----------------+
+
+
+Lookaside protocol offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+This extends librte_cryptodev to support the programming of an IPsec
+Security Association (SA), including its definition, as part of crypto
+session creation. In addition to standard crypto processing, as defined by
+the cryptodev, the security protocol processing is also offloaded to the
+crypto device.
+
+Decryption: The packet is sent to the crypto device for security
+protocol processing. The device decrypts the packet and may also
+optionally remove additional security headers from it.
+E.g. in the case of IPSec, the IPSec tunnel headers (if any) and ESP/AH
+headers will be removed from the packet and the decrypted packet may contain
+plain data only.
+
+.. note::
+
+ In the case of IPSec the device may internally manage anti-replay etc.
+ It will provide a configuration option for anti-replay behavior, i.e. to drop
+ the packets or pass them to the driver with error flags set in the descriptor.
+
+Encryption: The software submits the packet to the cryptodev as usual for
+encryption; in this case the hardware device will also add the relevant
+security protocol header while encrypting the packet. The software should
+make sure that the buffer has the required headroom and tailroom for any
+protocol header addition.
+
+.. note::
+
+ In the case of IPSec, the sequence number will be added to the packet,
+ and the device shall provide an indication when the sequence number is
+ about to overflow.
+
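+A minimal sketch of submitting a packet for lookaside protocol processing;
+``op_mp`` (a crypto operation pool), ``m`` (the packet mbuf), ``sess`` (a
+security session, see the session APIs later in this guide), ``cdev_id`` and
+``qp_id`` are assumed to exist, and the ``RTE_CRYPTO_OP_SECURITY_SESSION``
+session type is added to the cryptodev by a later patch in this series:
+
+.. code-block:: c
+
+    struct rte_crypto_op *op;
+
+    op = rte_crypto_op_alloc(op_mp, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+    op->sym->m_src = m;                     /* packet to be processed */
+
+    /* Attach the security session to the crypto operation instead of a
+     * regular cryptodev symmetric session.
+     */
+    rte_security_attach_session(op, sess);
+
+    rte_cryptodev_enqueue_burst(cdev_id, qp_id, &op, 1);
+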
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | | <------ SA maps to cryptodev session
+ | +------|------+ |
+ | +------|------+ |
+ | | \--------------------\
+ | | Crypto | | | <- Crypto processing through
+ | | /----------------\ | inline crypto PMD
+ | +------|------+ | | |
+ +--------V--------+ | |
+ | | |
+ +--------V--------+ | | create <-- SA is added to hw
+ | L2 Stack | | | inline using existing create
+ +--------|--------+ | | session sym session APIs
+ | | | |
+ +--------V--------+ +---|---|----V---+
+ | | | \---/ | | <--- Add tunnel, ESP header etc
+ | NIC PMD | | INLINE | | header to packet.Packet
+ | | | CRYPTO PMD | | Encryption/Decryption and
+ +--------|--------+ +----------------+ Authentication happens
+ | inline.
+ +--------|--------+
+ | NIC |
+ +--------|--------+
+ V
+
+Device Features and Capabilities
+---------------------------------
+
+Device Capabilities For Security Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The capabilities of a device (crypto or ethernet) which supports security
+operations are defined by the security action type, the security protocol,
+the protocol capabilities and the corresponding crypto capabilities for
+security. For the full scope of the security capability, see the definition
+of the ``rte_security_capability`` structure in the *DPDK API Reference*.
+
+.. code-block:: c
+
+ struct rte_security_capability;
+
+Each driver (crypto or ethernet) defines its own private array of capabilities
+for the operations it supports. Below is an example of the capabilities for a
+PMD which supports the IPSec protocol.
+
+.. code-block:: c
+
+ static const struct rte_security_capability pmd_security_capabilities[] = {
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = pmd_capabilities
+ },
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = pmd_capabilities
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+ };
+ static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
+ { /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ .auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ },
+ .aad_size = { 0 },
+ .iv_size = { 0 }
+ }
+ }
+ },
+ { /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ .cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }
+ }
+ }
+ };
+
+
+Capabilities Discovery
+~~~~~~~~~~~~~~~~~~~~~~
+
+Discovering the features and capabilities of a driver (crypto/ethernet)
+is achieved through the ``rte_security_capabilities_get()`` function.
+
+.. code-block:: c
+
+ const struct rte_security_capability *rte_security_capabilities_get(uint16_t id);
+
+This allows the user to query a specific driver and get all of that device's
+security capabilities. It returns an array of ``rte_security_capability``
+structures which contains all the capabilities for that device.
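+
+A minimal sketch of scanning the returned array; as in the example capability
+array above, the array is assumed to be terminated by an entry whose action is
+``RTE_SECURITY_ACTION_TYPE_NONE``, and ``id`` follows the prototype shown above:
+
+.. code-block:: c
+
+    /* Check whether the device supports lookaside IPsec protocol offload. */
+    static int
+    device_supports_lookaside_ipsec(uint16_t id)
+    {
+        const struct rte_security_capability *cap;
+
+        cap = rte_security_capabilities_get(id);
+        if (cap == NULL)
+            return 0;
+
+        while (cap->action != RTE_SECURITY_ACTION_TYPE_NONE) {
+            if (cap->action == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL &&
+                cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
+                return 1;
+            cap++;
+        }
+
+        return 0;
+    }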
+
+Security Session Create/Free
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Security sessions are created to store the immutable fields of a particular
+Security Association for a particular protocol. A session is defined by a
+security session configuration structure, which is used in the operation
+processing of a packet flow. Sessions manage protocol specific information
+as well as crypto parameters. Security sessions cache this immutable data in
+an optimal way for the underlying PMD, which allows further acceleration of
+the offload of crypto workloads.
+
+The security framework provides APIs to create and free sessions for
+crypto/ethernet devices, where sessions are mempool objects. It is the
+application's responsibility to create and manage the session mempools. The
+mempool object size should be large enough to accommodate the driver's
+private security session data.
+
+Once the session mempools have been created, ``rte_security_session_create()``
+is used to allocate and initialize a session for the required crypto/ethernet device.
+
+Session APIs need an ``rte_security_ctx`` parameter to identify the
+crypto/ethernet security ops. This parameter can be retrieved using the APIs
+``rte_cryptodev_get_sec_ctx()`` (for a crypto device) or
+``rte_eth_dev_get_sec_ctx()`` (for an ethernet port).
+
+Sessions already created can be updated with ``rte_security_session_update()``.
+
+When a session is no longer used, the user must call ``rte_security_session_destroy()``
+to free the driver private session data and return the memory back to the mempool.
+
+For lookaside protocol offload to a hardware crypto device, the ``rte_crypto_op``
+created by the application is attached to the security session by the API
+``rte_security_attach_session()``.
+
+For Inline Crypto and Inline Protocol offload, device specific metadata is
+updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
+``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
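+
+A minimal sketch of the session lifecycle for lookaside protocol offload is
+shown below; ``cdev_id``, ``op`` (an ``rte_crypto_op``), the session
+configuration ``conf`` (see the next section) and the mempool sizing values
+are application specific assumptions:
+
+.. code-block:: c
+
+    #define NB_SESSIONS    1024
+    #define SESS_PRIV_SIZE 256   /* must fit the driver's private session data */
+
+    struct rte_security_ctx *ctx;
+    struct rte_security_session *sess;
+    struct rte_mempool *sess_mp;
+
+    /* Security context of the lookaside crypto device. */
+    ctx = (struct rte_security_ctx *)rte_cryptodev_get_sec_ctx(cdev_id);
+
+    /* The application owns the session mempool. */
+    sess_mp = rte_mempool_create("sec_sess_mp", NB_SESSIONS,
+                                 sizeof(struct rte_security_session) +
+                                 SESS_PRIV_SIZE,
+                                 0, 0, NULL, NULL, NULL, NULL,
+                                 rte_socket_id(), 0);
+
+    /* Create the session and attach the crypto operation to it. */
+    sess = rte_security_session_create(ctx, &conf, sess_mp);
+    rte_security_attach_session(op, sess);
+
+    /* When the SA is torn down. */
+    rte_security_session_destroy(ctx, sess);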
+
+Security session configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Security Session configuration structure is defined as ``rte_security_session_conf``
+
+.. code-block:: c
+
+ struct rte_security_session_conf {
+ enum rte_security_session_action_type action_type;
+ /**< Type of action to be performed on the session */
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+ union {
+ struct rte_security_ipsec_xform ipsec;
+ struct rte_security_macsec_xform macsec;
+ };
+ /**< Configuration parameters for security session */
+ struct rte_crypto_sym_xform *crypto_xform;
+ /**< Security Session Crypto Transformations */
+ };
+
+The configuration structure reuses the ``rte_crypto_sym_xform`` struct for
+crypto related configuration. The ``rte_security_session_action_type`` enum
+is used to specify whether the session is configured for Lookaside Protocol
+offload, Inline Crypto or Inline Protocol offload.
+
+.. code-block:: c
+
+ enum rte_security_session_action_type {
+ RTE_SECURITY_ACTION_TYPE_NONE,
+ /**< No security actions */
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ /**< Crypto processing for security protocol is processed inline
+ * during transmission */
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ /**< All security protocol processing is performed inline during
+ * transmission */
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ /**< All security protocol processing including crypto is performed
+ * on a lookaside accelerator */
+ };
+
+The ``rte_security_session_protocol`` is defined as
+
+.. code-block:: c
+
+ enum rte_security_session_protocol {
+ RTE_SECURITY_PROTOCOL_IPSEC,
+ /**< IPsec Protocol */
+ RTE_SECURITY_PROTOCOL_MACSEC,
+ /**< MACSec Protocol */
+ };
+
+Currently the library defines configuration parameters for IPSec only. For
+other protocols like MACSec, structures and enums are defined as placeholders
+which will be updated in the future.
+
+IPsec related configuration parameters are defined in ``rte_security_ipsec_xform``
+
+.. code-block:: c
+
+ struct rte_security_ipsec_xform {
+ uint32_t spi;
+ /**< SA security parameter index */
+ uint32_t salt;
+ /**< SA salt */
+ struct rte_security_ipsec_sa_options options;
+ /**< various SA options */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPSec SA Direction - Egress/Ingress */
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA Protocol - AH/ESP */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA Mode - transport/tunnel */
+ struct rte_security_ipsec_tunnel_param tunnel;
+ /**< Tunnel parameters, NULL for transport mode */
+ };
+
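+Putting the two structures together, a minimal sketch of configuring a
+lookaside ESP tunnel egress session; the SPI value and the crypto transform
+chain ``xform`` are application specific assumptions:
+
+.. code-block:: c
+
+    struct rte_crypto_sym_xform *xform;   /* cipher/auth chain for the SA,
+                                           * prepared by the application
+                                           */
+
+    struct rte_security_session_conf conf = {
+        .action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+        .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+        .ipsec = {
+            .spi = 0x1000,   /* application chosen SPI */
+            .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+            .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+            .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+            /* .tunnel and .options filled in as required by the SA */
+        },
+        .crypto_xform = xform,
+    };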
+
+Security API
+~~~~~~~~~~~~
+
+The rte_security Library API is described in the *DPDK API Reference* document.
+
+Flow based Security Session
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the case of NIC based offloads, the security session specified in the
+``rte_flow_action_security`` must be created on the same port as the
+flow action that is being specified.
+
+The ingress/egress flow attribute should match that specified in the security
+session if the security session supports the definition of the direction.
+
+Multiple flows can be configured to use the same security session. For
+example, if the security session specifies an egress IPsec SA, then multiple
+flows can be mapped to that SA. In the case of an ingress IPsec SA, it is
+only valid to have a single flow mapped to that security session.
+
+.. code-block:: console
+
+ Configuration Path
+ |
+ +--------|--------+
+ | Add/Remove |
+ | IPsec SA | <------ Build security flow action of
+ | | | ipsec transform
+ |--------|--------|
+ |
+ +--------V--------+
+ | Flow API |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Add/Remove SA to/from hw context
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED |
+ | NIC |
+ | |
+ +--------|--------+
+
+* Add/Delete SA flow:
+ To add a new inline SA, construct a ``rte_flow_item`` for Ethernet + IP +
+ ESP using the SA selectors and the ``rte_crypto_ipsec_xform`` as the
+ ``rte_flow_action``. Note that any of the ``rte_flow_item`` entries may be
+ empty, which means that field is not checked (a code sketch follows the
+ flow examples below).
+
+.. code-block:: console
+
+ In its most basic form, an IPsec flow specification is as follows:
+ +-------+ +----------+ +--------+ +-----+
+ | Eth | -> | IP4/6 | -> | ESP | -> | END |
+ +-------+ +----------+ +--------+ +-----+
+
+ However, the API can represent IPsec crypto offload with any encapsulation:
+ +-------+ +--------+ +-----+
+ | Eth | -> ... -> | ESP | -> | END |
+ +-------+ +--------+ +-----+
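+
+A hedged sketch of creating such a flow rule with the SECURITY action;
+``port_id`` and ``sess`` (an inline crypto security session created on the
+same port) are assumed to exist, and the pattern specs are left empty so
+those fields are not checked:
+
+.. code-block:: c
+
+    struct rte_flow_attr attr = { .ingress = 1 };
+    struct rte_flow_action_security action_conf = {
+        .security_session = sess,
+    };
+    struct rte_flow_error err;
+
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH },
+        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
+        { .type = RTE_FLOW_ITEM_TYPE_ESP },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &action_conf },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+
+    struct rte_flow *flow;
+
+    flow = rte_flow_create(port_id, &attr, pattern, actions, &err);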
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 03/11] cryptodev: support security APIs
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 01/11] lib/rte_security: add security library Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 02/11] doc: add details of rte security Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 04/11] net: add ESP header to generic flow steering Akhil Goyal
` (8 subsequent siblings)
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Security ops are added to crypto device to support
protocol offloaded security operations.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
doc/guides/cryptodevs/features/default.ini | 1 +
lib/librte_cryptodev/rte_crypto.h | 3 ++-
lib/librte_cryptodev/rte_crypto_sym.h | 2 ++
lib/librte_cryptodev/rte_cryptodev.c | 10 ++++++++++
lib/librte_cryptodev/rte_cryptodev.h | 8 ++++++++
lib/librte_cryptodev/rte_cryptodev_version.map | 1 +
6 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index c98717a..18d66cb 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -10,6 +10,7 @@ Symmetric crypto =
Asymmetric crypto =
Sym operation chaining =
HW Accelerated =
+Protocol offload =
CPU SSE =
CPU AVX =
CPU AVX2 =
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 3ef9e41..eeed9ee 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -86,7 +86,8 @@ enum rte_crypto_op_status {
*/
enum rte_crypto_op_sess_type {
RTE_CRYPTO_OP_WITH_SESSION, /**< Session based crypto operation */
- RTE_CRYPTO_OP_SESSIONLESS /**< Session-less crypto operation */
+ RTE_CRYPTO_OP_SESSIONLESS, /**< Session-less crypto operation */
+ RTE_CRYPTO_OP_SECURITY_SESSION /**< Security session crypto operation */
};
/**
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 0a0ea59..5992063 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -508,6 +508,8 @@ struct rte_crypto_sym_op {
/**< Handle for the initialised session context */
struct rte_crypto_sym_xform *xform;
/**< Session-less API crypto operation parameters */
+ struct rte_security_session *sec_session;
+ /**< Handle for the initialised security session context */
};
RTE_STD_C11
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index e48d562..b9fbe0a 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -488,6 +488,16 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
return count;
}
+void *
+rte_cryptodev_get_sec_ctx(uint8_t dev_id)
+{
+ if (rte_crypto_devices[dev_id].feature_flags &
+ RTE_CRYPTODEV_FF_SECURITY)
+ return rte_crypto_devices[dev_id].security_ctx;
+
+ return NULL;
+}
+
int
rte_cryptodev_socket_id(uint8_t dev_id)
{
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index fd0e3f1..cdc12db 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -351,6 +351,8 @@ rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
/**< Utilises CPU NEON instructions */
#define RTE_CRYPTODEV_FF_CPU_ARM_CE (1ULL << 11)
/**< Utilises ARM CPU Cryptographic Extensions */
+#define RTE_CRYPTODEV_FF_SECURITY (1ULL << 12)
+/**< Support Security Protocol Processing */
/**
@@ -769,11 +771,17 @@ struct rte_cryptodev {
struct rte_cryptodev_cb_list link_intr_cbs;
/**< User application callback for interrupts if present */
+ void *security_ctx;
+ /**< Context for security ops */
+
__extension__
uint8_t attached : 1;
/**< Flag indicating the device is attached */
} __rte_cache_aligned;
+void *
+rte_cryptodev_get_sec_ctx(uint8_t dev_id);
+
/**
*
* The data part, with no function pointers, associated with each device.
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 919b6cc..3df3018 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -83,6 +83,7 @@ DPDK_17.08 {
DPDK_17.11 {
global:
+ rte_cryptodev_get_sec_ctx;
rte_cryptodev_name_get;
} DPDK_17.08;
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 04/11] net: add ESP header to generic flow steering
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (2 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 03/11] cryptodev: support security APIs Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 05/11] mbuf: add security crypto flags and mbuf fields Akhil Goyal
` (7 subsequent siblings)
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
The ESP header is required for IPsec crypto actions.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
doc/api/doxy-api-index.md | 1 +
lib/librte_ether/rte_flow.h | 26 ++++++++++++++++++++
lib/librte_net/Makefile | 2 +-
lib/librte_net/rte_esp.h | 60 +++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 88 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_net/rte_esp.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 0f8d6d9..ac994ed 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -101,6 +101,7 @@ The public API headers are grouped by topics:
[ethernet] (@ref rte_ether.h),
[ARP] (@ref rte_arp.h),
[ICMP] (@ref rte_icmp.h),
+ [ESP] (@ref rte_esp.h),
[IP] (@ref rte_ip.h),
[SCTP] (@ref rte_sctp.h),
[TCP] (@ref rte_tcp.h),
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 062e3ac..bd8274d 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -50,6 +50,7 @@
#include <rte_tcp.h>
#include <rte_udp.h>
#include <rte_byteorder.h>
+#include <rte_esp.h>
#ifdef __cplusplus
extern "C" {
@@ -336,6 +337,13 @@ enum rte_flow_item_type {
* See struct rte_flow_item_gtp.
*/
RTE_FLOW_ITEM_TYPE_GTPU,
+
+ /**
+ * Matches an ESP header.
+ *
+ * See struct rte_flow_item_esp.
+ */
+ RTE_FLOW_ITEM_TYPE_ESP,
};
/**
@@ -787,6 +795,24 @@ static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
#endif
/**
+ * RTE_FLOW_ITEM_TYPE_ESP
+ *
+ * Matches an ESP header.
+ */
+struct rte_flow_item_esp {
+ struct esp_hdr hdr; /**< ESP header definition. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_ESP. */
+#ifndef __cplusplus
+static const struct rte_flow_item_esp rte_flow_item_esp_mask = {
+ .hdr = {
+ .spi = 0xffffffff,
+ },
+};
+#endif
+
+/**
* Matching pattern item definition.
*
* A pattern is formed by stacking items starting from the lowest protocol
diff --git a/lib/librte_net/Makefile b/lib/librte_net/Makefile
index cdaf0c7..50c358e 100644
--- a/lib/librte_net/Makefile
+++ b/lib/librte_net/Makefile
@@ -43,7 +43,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_NET) := rte_net.c
SRCS-$(CONFIG_RTE_LIBRTE_NET) += rte_net_crc.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h rte_esp.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_sctp.h rte_icmp.h rte_arp.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_ether.h rte_gre.h rte_net.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_net_crc.h
diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
new file mode 100644
index 0000000..e228af0
--- /dev/null
+++ b/lib/librte_net/rte_esp.h
@@ -0,0 +1,60 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_ESP_H_
+#define _RTE_ESP_H_
+
+/**
+ * @file
+ *
+ * ESP-related defines
+ */
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * ESP Header
+ */
+struct esp_hdr {
+ uint32_t spi; /**< Security Parameters Index */
+ uint32_t seq; /**< packet sequence number */
+} __attribute__((__packed__));
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_ESP_H_ */
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 05/11] mbuf: add security crypto flags and mbuf fields
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (3 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 04/11] net: add ESP header to generic flow steering Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-25 9:38 ` Olivier MATZ
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs Akhil Goyal
` (6 subsequent siblings)
11 siblings, 1 reply; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
Add security crypto flags and update mbuf fields to support
IPsec crypto offload for transmitted packets, and to indicate
crypto result for received packets.
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ++++++
lib/librte_mbuf/rte_mbuf.h | 35 ++++++++++++++++++++++++++++++++---
lib/librte_mbuf/rte_mbuf_ptype.c | 1 +
lib/librte_mbuf/rte_mbuf_ptype.h | 11 +++++++++++
4 files changed, 50 insertions(+), 3 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 0e18709..6659261 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -324,6 +324,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
case PKT_RX_LRO: return "PKT_RX_LRO";
case PKT_RX_TIMESTAMP: return "PKT_RX_TIMESTAMP";
+ case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
+ case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
default: return NULL;
}
}
@@ -359,6 +361,8 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
{ PKT_RX_LRO, PKT_RX_LRO, NULL },
{ PKT_RX_TIMESTAMP, PKT_RX_TIMESTAMP, NULL },
+ { PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
+ { PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
};
const char *name;
unsigned int i;
@@ -411,6 +415,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
+ case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
default: return NULL;
}
}
@@ -444,6 +449,7 @@ rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK,
"PKT_TX_TUNNEL_NONE" },
{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
+ { PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
};
const char *name;
unsigned int i;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index cc38040..5d478da 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -189,11 +189,26 @@ extern "C" {
*/
#define PKT_RX_TIMESTAMP (1ULL << 17)
+/**
+ * Indicate that security offload processing was applied on the RX packet.
+ */
+#define PKT_RX_SEC_OFFLOAD (1ULL << 18)
+
+/**
+ * Indicate that security offload processing failed on the RX packet.
+ */
+#define PKT_RX_SEC_OFFLOAD_FAILED (1ULL << 19)
+
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Request security offload processing on the TX packet.
+ */
+#define PKT_TX_SEC_OFFLOAD (1ULL << 43)
+
+/**
* Offload the MACsec. This flag must be set by the application to enable
* this offload feature for a packet to be transmitted.
*/
@@ -316,7 +331,8 @@ extern "C" {
PKT_TX_QINQ_PKT | \
PKT_TX_VLAN_PKT | \
PKT_TX_TUNNEL_MASK | \
- PKT_TX_MACSEC)
+ PKT_TX_MACSEC | \
+ PKT_TX_SEC_OFFLOAD)
#define __RESERVED (1ULL << 61) /**< reserved for future mbuf use */
@@ -456,8 +472,21 @@ struct rte_mbuf {
uint32_t l3_type:4; /**< (Outer) L3 type. */
uint32_t l4_type:4; /**< (Outer) L4 type. */
uint32_t tun_type:4; /**< Tunnel type. */
- uint32_t inner_l2_type:4; /**< Inner L2 type. */
- uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ RTE_STD_C11
+ union {
+ uint8_t inner_esp_next_proto;
+ /**< ESP next protocol type, valid if
+ * RTE_PTYPE_TUNNEL_ESP tunnel type is set
+ * on both Tx and Rx.
+ */
+ __extension__
+ struct {
+ uint8_t inner_l2_type:4;
+ /**< Inner L2 type. */
+ uint8_t inner_l3_type:4;
+ /**< Inner L3 type. */
+ };
+ };
uint32_t inner_l4_type:4; /**< Inner L4 type. */
};
};
diff --git a/lib/librte_mbuf/rte_mbuf_ptype.c b/lib/librte_mbuf/rte_mbuf_ptype.c
index a450814..a623226 100644
--- a/lib/librte_mbuf/rte_mbuf_ptype.c
+++ b/lib/librte_mbuf/rte_mbuf_ptype.c
@@ -91,6 +91,7 @@ const char *rte_get_ptype_tunnel_name(uint32_t ptype)
case RTE_PTYPE_TUNNEL_GRENAT: return "TUNNEL_GRENAT";
case RTE_PTYPE_TUNNEL_GTPC: return "TUNNEL_GTPC";
case RTE_PTYPE_TUNNEL_GTPU: return "TUNNEL_GTPU";
+ case RTE_PTYPE_TUNNEL_ESP: return "TUNNEL_ESP";
default: return "TUNNEL_UNKNOWN";
}
}
diff --git a/lib/librte_mbuf/rte_mbuf_ptype.h b/lib/librte_mbuf/rte_mbuf_ptype.h
index 978c4a2..5c62435 100644
--- a/lib/librte_mbuf/rte_mbuf_ptype.h
+++ b/lib/librte_mbuf/rte_mbuf_ptype.h
@@ -415,6 +415,17 @@ extern "C" {
*/
#define RTE_PTYPE_TUNNEL_GTPU 0x00008000
/**
+ * ESP (IP Encapsulating Security Payload) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=51>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=51>
+ */
+#define RTE_PTYPE_TUNNEL_ESP 0x00009000
+/**
* Mask of tunneling packet types.
*/
#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 05/11] mbuf: add security crypto flags and mbuf fields
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 05/11] mbuf: add security crypto flags and mbuf fields Akhil Goyal
@ 2017-10-25 9:38 ` Olivier MATZ
2017-10-25 12:05 ` Akhil Goyal
0 siblings, 1 reply; 195+ messages in thread
From: Olivier MATZ @ 2017-10-25 9:38 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs
On Tue, Oct 24, 2017 at 07:45:39PM +0530, Akhil Goyal wrote:
> From: Boris Pismenny <borisp@mellanox.com>
>
> Add security crypto flags and update mbuf fields to support
> IPsec crypto offload for transmitted packets, and to indicate
> crypto result for received packets.
>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
[...]
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -189,11 +189,26 @@ extern "C" {
> */
> #define PKT_RX_TIMESTAMP (1ULL << 17)
>
> +/**
> + * Indicate that security offload processing was applied on the RX packet.
> + */
> +#define PKT_RX_SEC_OFFLOAD (1ULL << 18)
> +
> +/**
> + * Indicate that security offload processing failed on the RX packet.
> + */
> +#define PKT_RX_SEC_OFFLOAD_FAILED (1ULL << 19)
> +
in case you do a v6, please fix the alignment, else we'll
fix it globally in another patch later.
Acked-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 05/11] mbuf: add security crypto flags and mbuf fields
2017-10-25 9:38 ` Olivier MATZ
@ 2017-10-25 12:05 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 12:05 UTC (permalink / raw)
To: Olivier MATZ
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs
Hi Olivier,
On 10/25/2017 3:08 PM, Olivier MATZ wrote:
> On Tue, Oct 24, 2017 at 07:45:39PM +0530, Akhil Goyal wrote:
>> From: Boris Pismenny <borisp@mellanox.com>
>>
>> Add security crypto flags and update mbuf fields to support
>> IPsec crypto offload for transmitted packets, and to indicate
>> crypto result for received packets.
>>
>> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
>> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>
> [...]
>
>> --- a/lib/librte_mbuf/rte_mbuf.h
>> +++ b/lib/librte_mbuf/rte_mbuf.h
>> @@ -189,11 +189,26 @@ extern "C" {
>> */
>> #define PKT_RX_TIMESTAMP (1ULL << 17)
>>
>> +/**
>> + * Indicate that security offload processing was applied on the RX packet.
>> + */
>> +#define PKT_RX_SEC_OFFLOAD (1ULL << 18)
>> +
>> +/**
>> + * Indicate that security offload processing failed on the RX packet.
>> + */
>> +#define PKT_RX_SEC_OFFLOAD_FAILED (1ULL << 19)
>> +
>
> in case you do a v6, please fix the alignment, else we'll
> fix it globally in another patch later.
>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
>
will fix the alignment of the defines made in this patch.
Thanks,
Akhil
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (4 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 05/11] mbuf: add security crypto flags and mbuf fields Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-25 5:05 ` Hemant Agrawal
2017-10-25 7:01 ` Shahaf Shuler
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 07/11] ethdev: add rte flow action for crypto Akhil Goyal
` (5 subsequent siblings)
11 siblings, 2 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Declan Doherty <declan.doherty@intel.com>
rte_flow_action type and ethdev updated to support rte_security
sessions for crypto offload to ethernet device.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
lib/librte_ether/rte_ethdev.c | 7 +++++++
lib/librte_ether/rte_ethdev.h | 8 ++++++++
lib/librte_ether/rte_ethdev_version.map | 1 +
3 files changed, 16 insertions(+)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0b1e928..a3b0e4e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -301,6 +301,13 @@ rte_eth_dev_socket_id(uint16_t port_id)
return rte_eth_devices[port_id].data->numa_node;
}
+void *
+rte_eth_dev_get_sec_ctx(uint8_t port_id)
+{
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+ return rte_eth_devices[port_id].security_ctx;
+}
+
uint16_t
rte_eth_dev_count(void)
{
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index b773589..119f7fc 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -180,6 +180,8 @@ extern "C" {
#include <rte_dev.h>
#include <rte_devargs.h>
#include <rte_errno.h>
+#include <rte_common.h>
+
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
@@ -963,6 +965,7 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_CRC_STRIP 0x00001000
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_SECURITY 0x00008000
#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM)
@@ -998,6 +1001,7 @@ struct rte_eth_conf {
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
+#define DEV_TX_OFFLOAD_SECURITY 0x00020000
struct rte_pci_device;
@@ -1741,8 +1745,12 @@ struct rte_eth_dev {
*/
struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
enum rte_eth_dev_state state; /**< Flag indicating the port state */
+ void *security_ctx; /**< Context for security ops */
} __rte_cache_aligned;
+void *
+rte_eth_dev_get_sec_ctx(uint8_t port_id);
+
struct rte_eth_dev_sriov {
uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
uint8_t nb_q_per_pool; /**< rx queue number per pool */
diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map
index 57d9b54..e9681ac 100644
--- a/lib/librte_ether/rte_ethdev_version.map
+++ b/lib/librte_ether/rte_ethdev_version.map
@@ -191,6 +191,7 @@ DPDK_17.08 {
DPDK_17.11 {
global:
+ rte_eth_dev_get_sec_ctx;
rte_eth_dev_pool_ops_supported;
rte_eth_dev_reset;
rte_flow_error_set;
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs Akhil Goyal
@ 2017-10-25 5:05 ` Hemant Agrawal
2017-10-25 7:01 ` Shahaf Shuler
1 sibling, 0 replies; 195+ messages in thread
From: Hemant Agrawal @ 2017-10-25 5:05 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, radu.nicolau, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, john.mcnamara,
konstantin.ananyev, shahafs, olivier.matz
On 10/24/2017 7:45 PM, Akhil Goyal wrote:
> From: Declan Doherty <declan.doherty@intel.com>
>
> rte_flow_action type and ethdev updated to support rte_security
> sessions for crypto offload to ethernet device.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 7 +++++++
> lib/librte_ether/rte_ethdev.h | 8 ++++++++
> lib/librte_ether/rte_ethdev_version.map | 1 +
> 3 files changed, 16 insertions(+)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0b1e928..a3b0e4e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -301,6 +301,13 @@ rte_eth_dev_socket_id(uint16_t port_id)
> return rte_eth_devices[port_id].data->numa_node;
> }
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id)
> +{
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> + return rte_eth_devices[port_id].security_ctx;
> +}
> +
> uint16_t
> rte_eth_dev_count(void)
> {
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index b773589..119f7fc 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -180,6 +180,8 @@ extern "C" {
> #include <rte_dev.h>
> #include <rte_devargs.h>
> #include <rte_errno.h>
> +#include <rte_common.h>
> +
> #include "rte_ether.h"
> #include "rte_eth_ctrl.h"
> #include "rte_dev_info.h"
> @@ -963,6 +965,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_CRC_STRIP 0x00001000
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> #define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
> +#define DEV_RX_OFFLOAD_SECURITY 0x00008000
> #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM)
> @@ -998,6 +1001,7 @@ struct rte_eth_conf {
> * When set application must guarantee that per-queue all mbufs comes from
> * the same mempool and has refcnt = 1.
> */
> +#define DEV_TX_OFFLOAD_SECURITY 0x00020000
>
> struct rte_pci_device;
>
> @@ -1741,8 +1745,12 @@ struct rte_eth_dev {
> */
> struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> enum rte_eth_dev_state state; /**< Flag indicating the port state */
> + void *security_ctx; /**< Context for security ops */
> } __rte_cache_aligned;
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id);
> +
> struct rte_eth_dev_sriov {
> uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
> uint8_t nb_q_per_pool; /**< rx queue number per pool */
> diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map
> index 57d9b54..e9681ac 100644
> --- a/lib/librte_ether/rte_ethdev_version.map
> +++ b/lib/librte_ether/rte_ethdev_version.map
> @@ -191,6 +191,7 @@ DPDK_17.08 {
> DPDK_17.11 {
> global:
>
> + rte_eth_dev_get_sec_ctx;
> rte_eth_dev_pool_ops_supported;
> rte_eth_dev_reset;
> rte_flow_error_set;
>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs Akhil Goyal
2017-10-25 5:05 ` Hemant Agrawal
@ 2017-10-25 7:01 ` Shahaf Shuler
2017-10-25 12:35 ` Aviad Yehezkel
1 sibling, 1 reply; 195+ messages in thread
From: Shahaf Shuler @ 2017-10-25 7:01 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, Boris Pismenny, Aviad Yehezkel, Thomas Monjalon,
sandeep.malik, jerin.jacob, john.mcnamara, konstantin.ananyev,
olivier.matz
Hi,
I know we are in a rush to put these patches in before RC2. However, I still see a critical issue (below).
Tuesday, October 24, 2017 5:16 PM, Akhil Goyal:
> From: Declan Doherty <declan.doherty@intel.com>
>
> rte_flow_action type and ethdev updated to support rte_security sessions
> for crypto offload to ethernet device.
>
> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 7 +++++++
> lib/librte_ether/rte_ethdev.h | 8 ++++++++
> lib/librte_ether/rte_ethdev_version.map | 1 +
> 3 files changed, 16 insertions(+)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0b1e928..a3b0e4e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -301,6 +301,13 @@ rte_eth_dev_socket_id(uint16_t port_id)
> return rte_eth_devices[port_id].data->numa_node;
> }
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id) {
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
> + return rte_eth_devices[port_id].security_ctx;
> +}
> +
> uint16_t
> rte_eth_dev_count(void)
> {
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index b773589..119f7fc 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -180,6 +180,8 @@ extern "C" {
> #include <rte_dev.h>
> #include <rte_devargs.h>
> #include <rte_errno.h>
> +#include <rte_common.h>
> +
> #include "rte_ether.h"
> #include "rte_eth_ctrl.h"
> #include "rte_dev_info.h"
> @@ -963,6 +965,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_CRC_STRIP 0x00001000
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> #define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
> +#define DEV_RX_OFFLOAD_SECURITY 0x00008000
How will the application control this offload in 17.11?
The PMDs are not yet moved to the new API, so is crypto offload going to be enabled by default with no way to disable it?
> #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM |
> \
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM)
> @@ -998,6 +1001,7 @@ struct rte_eth_conf {
> * When set application must guarantee that per-queue all mbufs comes
> from
> * the same mempool and has refcnt = 1.
> */
> +#define DEV_TX_OFFLOAD_SECURITY 0x00020000
Same point here.
>
> struct rte_pci_device;
>
> @@ -1741,8 +1745,12 @@ struct rte_eth_dev {
> */
> struct rte_eth_rxtx_callback
> *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> enum rte_eth_dev_state state; /**< Flag indicating the port state */
> + void *security_ctx; /**< Context for security ops */
> } __rte_cache_aligned;
>
> +void *
> +rte_eth_dev_get_sec_ctx(uint8_t port_id);
> +
> struct rte_eth_dev_sriov {
> uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
> uint8_t nb_q_per_pool; /**< rx queue number per pool */
> diff --git a/lib/librte_ether/rte_ethdev_version.map
> b/lib/librte_ether/rte_ethdev_version.map
> index 57d9b54..e9681ac 100644
> --- a/lib/librte_ether/rte_ethdev_version.map
> +++ b/lib/librte_ether/rte_ethdev_version.map
> @@ -191,6 +191,7 @@ DPDK_17.08 {
> DPDK_17.11 {
> global:
>
> + rte_eth_dev_get_sec_ctx;
> rte_eth_dev_pool_ops_supported;
> rte_eth_dev_reset;
> rte_flow_error_set;
> --
> 2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs
2017-10-25 7:01 ` Shahaf Shuler
@ 2017-10-25 12:35 ` Aviad Yehezkel
0 siblings, 0 replies; 195+ messages in thread
From: Aviad Yehezkel @ 2017-10-25 12:35 UTC (permalink / raw)
To: Shahaf Shuler, Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, Boris Pismenny, Aviad Yehezkel, Thomas Monjalon,
sandeep.malik, jerin.jacob, john.mcnamara, konstantin.ananyev,
olivier.matz
On 10/25/2017 10:01 AM, Shahaf Shuler wrote:
> Hi,
>
> I know we are in a rush to put these patches in before RC2. However, I still see a critical issue (below).
>
> Tuesday, October 24, 2017 5:16 PM, Akhil Goyal:
>> From: Declan Doherty <declan.doherty@intel.com>
>>
>> rte_flow_action type and ethdev updated to support rte_security sessions
>> for crypto offload to ethernet device.
>>
>> Signed-off-by: Boris Pismenny <borisp@mellanox.com>
>> Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> ---
>> lib/librte_ether/rte_ethdev.c | 7 +++++++
>> lib/librte_ether/rte_ethdev.h | 8 ++++++++
>> lib/librte_ether/rte_ethdev_version.map | 1 +
>> 3 files changed, 16 insertions(+)
>>
>> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>> index 0b1e928..a3b0e4e 100644
>> --- a/lib/librte_ether/rte_ethdev.c
>> +++ b/lib/librte_ether/rte_ethdev.c
>> @@ -301,6 +301,13 @@ rte_eth_dev_socket_id(uint16_t port_id)
>> return rte_eth_devices[port_id].data->numa_node;
>> }
>>
>> +void *
>> +rte_eth_dev_get_sec_ctx(uint8_t port_id) {
>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
>> + return rte_eth_devices[port_id].security_ctx;
>> +}
>> +
>> uint16_t
>> rte_eth_dev_count(void)
>> {
>> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
>> index b773589..119f7fc 100644
>> --- a/lib/librte_ether/rte_ethdev.h
>> +++ b/lib/librte_ether/rte_ethdev.h
>> @@ -180,6 +180,8 @@ extern "C" {
>> #include <rte_dev.h>
>> #include <rte_devargs.h>
>> #include <rte_errno.h>
>> +#include <rte_common.h>
>> +
>> #include "rte_ether.h"
>> #include "rte_eth_ctrl.h"
>> #include "rte_dev_info.h"
>> @@ -963,6 +965,7 @@ struct rte_eth_conf {
>> #define DEV_RX_OFFLOAD_CRC_STRIP 0x00001000
>> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
>> #define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
>> +#define DEV_RX_OFFLOAD_SECURITY 0x00008000
> How will the application control this offload in 17.11?
> The PMDs are not yet moved to the new API, so is crypto offload going to be enabled by default with no way to disable it?
will be fixed in v6
>
>> #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM |
>> \
>> DEV_RX_OFFLOAD_UDP_CKSUM | \
>> DEV_RX_OFFLOAD_TCP_CKSUM)
>> @@ -998,6 +1001,7 @@ struct rte_eth_conf {
>> * When set application must guarantee that per-queue all mbufs comes
>> from
>> * the same mempool and has refcnt = 1.
>> */
>> +#define DEV_TX_OFFLOAD_SECURITY 0x00020000
> Same point here.
>
>> struct rte_pci_device;
>>
>> @@ -1741,8 +1745,12 @@ struct rte_eth_dev {
>> */
>> struct rte_eth_rxtx_callback
>> *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
>> enum rte_eth_dev_state state; /**< Flag indicating the port state */
>> + void *security_ctx; /**< Context for security ops */
>> } __rte_cache_aligned;
>>
>> +void *
>> +rte_eth_dev_get_sec_ctx(uint8_t port_id);
>> +
>> struct rte_eth_dev_sriov {
>> uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
>> uint8_t nb_q_per_pool; /**< rx queue number per pool */
>> diff --git a/lib/librte_ether/rte_ethdev_version.map
>> b/lib/librte_ether/rte_ethdev_version.map
>> index 57d9b54..e9681ac 100644
>> --- a/lib/librte_ether/rte_ethdev_version.map
>> +++ b/lib/librte_ether/rte_ethdev_version.map
>> @@ -191,6 +191,7 @@ DPDK_17.08 {
>> DPDK_17.11 {
>> global:
>>
>> + rte_eth_dev_get_sec_ctx;
>> rte_eth_dev_pool_ops_supported;
>> rte_eth_dev_reset;
>> rte_flow_error_set;
>> --
>> 2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 07/11] ethdev: add rte flow action for crypto
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (5 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 06/11] ethdev: support security APIs Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system Akhil Goyal
` (4 subsequent siblings)
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
The crypto action is specified by an application to request
crypto offload for a flow.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 84 +++++++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_flow.h | 39 ++++++++++++++++++
2 files changed, 121 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index bcb438e..d158be5 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -187,7 +187,7 @@ Pattern item
Pattern items fall in two categories:
- Matching protocol headers and packet data (ANY, RAW, ETH, VLAN, IPV4,
- IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE and so on), usually
+ IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE, ESP and so on), usually
associated with a specification structure.
- Matching meta-data or affecting pattern processing (END, VOID, INVERT, PF,
@@ -972,6 +972,14 @@ flow rules.
- ``teid``: tunnel endpoint identifier.
- Default ``mask`` matches teid only.
+Item: ``ESP``
+^^^^^^^^^^^^^
+
+Matches an ESP header.
+
+- ``hdr``: ESP header definition (``rte_esp.h``).
+- Default ``mask`` matches SPI only.
+
Actions
~~~~~~~
@@ -989,7 +997,7 @@ They fall in three categories:
additional processing by subsequent flow rules.
- Other non-terminating meta actions that do not affect the fate of packets
- (END, VOID, MARK, FLAG, COUNT).
+ (END, VOID, MARK, FLAG, COUNT, SECURITY).
When several actions are combined in a flow rule, they should all have
different types (e.g. dropping a packet twice is not possible).
@@ -1394,6 +1402,78 @@ the rte_mtr* API.
| ``mtr_id`` | MTR object ID |
+--------------+---------------+
+Action: ``SECURITY``
+^^^^^^^^^^^^^^^^^^^^
+
+Perform the security action on flows matched by the pattern items
+according to the configuration of the security session.
+
+This action modifies the payload of matched flows. For INLINE_CRYPTO, the
+security protocol headers and IV are fully provided by the application as
+specified in the flow pattern. The payload of matching packets is
+encrypted on egress, and decrypted and authenticated on ingress.
+For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
+providing full encapsulation and decapsulation of packets in security
+protocols. The flow pattern specifies both the outer security header fields
+and the inner packet fields. The security session specified in the action
+must match the pattern parameters.
+
+The security session specified in the action must be created on the same
+port as the flow action that is being specified.
+
+The ingress/egress flow attribute should match that specified in the
+security session if the security session supports the definition of the
+direction.
+
+Multiple flows can be configured to use the same security session.
+
+- Non-terminating by default.
+
+.. _table_rte_flow_action_security:
+
+.. table:: SECURITY
+
+ +----------------------+--------------------------------------+
+ | Field | Value |
+ +======================+======================================+
+ | ``security_session`` | security session to apply |
+ +----------------------+--------------------------------------+
+
+The following is an example of configuring IPsec inline using the
+INLINE_CRYPTO security session:
+
+The encryption algorithm, keys and salt are part of the opaque
+``rte_security_session``. The SA is identified according to the IP and ESP
+fields in the pattern items.
+
+.. _table_rte_flow_item_esp_inline_example:
+
+.. table:: IPsec inline crypto flow pattern items.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | ESP |
+ +-------+----------+
+ | 3 | END |
+ +-------+----------+
+
+.. _table_rte_flow_action_esp_inline_example:
+
+.. table:: IPsec inline flow actions.
+
+ +-------+----------+
+ | Index | Action |
+ +=======+==========+
+ | 0 | SECURITY |
+ +-------+----------+
+ | 1 | END |
+ +-------+----------+
+
Negative types
~~~~~~~~~~~~~~
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index bd8274d..47c88ea 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1001,6 +1001,14 @@ enum rte_flow_action_type {
* See file rte_mtr.h for MTR object configuration.
*/
RTE_FLOW_ACTION_TYPE_METER,
+
+ /**
+ * Redirects packets to security engine of current device for security
+ * processing as specified by security session.
+ *
+ * See struct rte_flow_action_security.
+ */
+ RTE_FLOW_ACTION_TYPE_SECURITY
};
/**
@@ -1108,6 +1116,37 @@ struct rte_flow_action_meter {
};
/**
+ * RTE_FLOW_ACTION_TYPE_SECURITY
+ *
+ * Perform the security action on flows matched by the pattern items
+ * according to the configuration of the security session.
+ *
+ * This action modifies the payload of matched flows. For INLINE_CRYPTO, the
+ * security protocol headers and IV are fully provided by the application as
+ * specified in the flow pattern. The payload of matching packets is
+ * encrypted on egress, and decrypted and authenticated on ingress.
+ * For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
+ * providing full encapsulation and decapsulation of packets in security
+ * protocols. The flow pattern specifies both the outer security header fields
+ * and the inner packet fields. The security session specified in the action
+ * must match the pattern parameters.
+ *
+ * The security session specified in the action must be created on the same
+ * port as the flow action that is being specified.
+ *
+ * The ingress/egress flow attribute should match that specified in the
+ * security session if the security session supports the definition of the
+ * direction.
+ *
+ * Multiple flows can be configured to use the same security session.
+ *
+ * Non-terminating by default.
+ */
+struct rte_flow_action_security {
+ void *security_session; /**< Pointer to security session structure. */
+};
+
+/**
* Definition of a single action.
*
* A list of actions is terminated by a END action.
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
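A minimal sketch of the flow rule described in the documentation above (pattern ETH / IPv4 / ESP, actions SECURITY / END), assuming a security session already created on the same port; the function and parameter names are illustrative only:

#include <rte_flow.h>
#include <rte_security.h>

static struct rte_flow *
create_inline_esp_flow(uint16_t port_id,
		       struct rte_security_session *sec_sess,
		       const struct rte_flow_item_ipv4 *ipv4_spec,
		       const struct rte_flow_item_esp *esp_spec)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = ipv4_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP, .spec = esp_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_security sec_action = {
		.security_session = sec_sess,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &sec_action },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	/* The SA is identified by the IP and ESP (SPI) fields in the spec. */
	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}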
* [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (6 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 07/11] ethdev: add rte flow action for crypto Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 20:48 ` Thomas Monjalon
2017-10-25 5:04 ` Hemant Agrawal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 09/11] net/ixgbe: enable inline ipsec Akhil Goyal
` (3 subsequent siblings)
11 siblings, 2 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
config/common_base | 5 +++++
lib/Makefile | 5 +++++
mk/rte.app.mk | 1 +
3 files changed, 11 insertions(+)
diff --git a/config/common_base b/config/common_base
index d9471e8..f5d085d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -548,6 +548,11 @@ CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO=n
CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO_DEBUG=n
#
+# Compile generic security library
+#
+CONFIG_RTE_LIBRTE_SECURITY=y
+
+#
# Compile generic event device library
#
CONFIG_RTE_LIBRTE_EVENTDEV=y
diff --git a/lib/Makefile b/lib/Makefile
index 527b95b..645094c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -50,6 +50,11 @@ DEPDIRS-librte_ether += librte_mbuf
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
DEPDIRS-librte_cryptodev += librte_kvargs
+DEPDIRS-librte_cryptodev += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
+DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
+DEPDIRS-librte_security += librte_ether
+DEPDIRS-librte_security += librte_cryptodev
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8192b98..d975fad 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF) += -lrte_mbuf
_LDLIBS-$(CONFIG_RTE_LIBRTE_NET) += -lrte_net
_LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lrte_ethdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += -lrte_cryptodev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
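With CONFIG_RTE_LIBRTE_SECURITY=y and -lrte_security linked in, an application can include rte_security.h directly; a small sketch (the helper name is illustrative) that walks a device's advertised capabilities as exposed by the drivers later in this series:

#include <rte_security.h>

/* Count the IPsec capabilities advertised by a security context. The
 * capability array is terminated by an RTE_SECURITY_ACTION_TYPE_NONE entry.
 */
static unsigned int
count_ipsec_capabilities(struct rte_security_ctx *ctx)
{
	const struct rte_security_capability *cap =
		rte_security_capabilities_get(ctx);
	unsigned int n = 0;

	for (; cap != NULL && cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++)
		if (cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
			n++;
	return n;
}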
* Re: [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system Akhil Goyal
@ 2017-10-24 20:48 ` Thomas Monjalon
2017-10-25 11:12 ` Akhil Goyal
2017-10-25 5:04 ` Hemant Agrawal
1 sibling, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-24 20:48 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Can you squash this patch with the one bringing the lib?
> +DEPDIRS-librte_cryptodev += librte_ether
I don't like this dependency.
Why is it needed?
> +DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
> +DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
> +DEPDIRS-librte_security += librte_ether
> +DEPDIRS-librte_security += librte_cryptodev
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system
2017-10-24 20:48 ` Thomas Monjalon
@ 2017-10-25 11:12 ` Akhil Goyal
0 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 11:12 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
Hi Thomas,
On 10/25/2017 2:18 AM, Thomas Monjalon wrote:
> Can you squash this patch with the one bringing the lib?
OK, I will do that. I will need to move the cryptodev/mbuf/ethdev/net patches
before the library patch.
>
>> +DEPDIRS-librte_cryptodev += librte_ether
>
> I don't like this dependency.
> Why is it needed?
It will be removed in v6. We no longer need it; it was used in some previous
versions of this series and was accidentally left in. Thanks for pointing
this out.
>
>> +DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
>> +DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
>> +DEPDIRS-librte_security += librte_ether
>> +DEPDIRS-librte_security += librte_cryptodev
>
>
-Akhil
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system Akhil Goyal
2017-10-24 20:48 ` Thomas Monjalon
@ 2017-10-25 5:04 ` Hemant Agrawal
1 sibling, 0 replies; 195+ messages in thread
From: Hemant Agrawal @ 2017-10-25 5:04 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: declan.doherty, pablo.de.lara.guarch, radu.nicolau, borisp,
aviadye, thomas, sandeep.malik, jerin.jacob, john.mcnamara,
konstantin.ananyev, shahafs, olivier.matz
On 10/24/2017 7:45 PM, Akhil Goyal wrote:
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> ---
> config/common_base | 5 +++++
> lib/Makefile | 5 +++++
> mk/rte.app.mk | 1 +
> 3 files changed, 11 insertions(+)
>
> diff --git a/config/common_base b/config/common_base
> index d9471e8..f5d085d 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -548,6 +548,11 @@ CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO=n
> CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO_DEBUG=n
>
> #
> +# Compile generic security library
> +#
> +CONFIG_RTE_LIBRTE_SECURITY=y
> +
> +#
> # Compile generic event device library
> #
> CONFIG_RTE_LIBRTE_EVENTDEV=y
> diff --git a/lib/Makefile b/lib/Makefile
> index 527b95b..645094c 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -50,6 +50,11 @@ DEPDIRS-librte_ether += librte_mbuf
> DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
> DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
> DEPDIRS-librte_cryptodev += librte_kvargs
> +DEPDIRS-librte_cryptodev += librte_ether
> +DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
> +DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
> +DEPDIRS-librte_security += librte_ether
> +DEPDIRS-librte_security += librte_cryptodev
> DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
> DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
> DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 8192b98..d975fad 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF) += -lrte_mbuf
> _LDLIBS-$(CONFIG_RTE_LIBRTE_NET) += -lrte_net
> _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lrte_ethdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += -lrte_cryptodev
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
> _LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
> _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 09/11] net/ixgbe: enable inline ipsec
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (7 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 08/11] mk: add rte security into build system Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 10/11] crypto/dpaa2_sec: add support for protocol offload ipsec Akhil Goyal
` (2 subsequent siblings)
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
drivers/net/ixgbe/Makefile | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
drivers/net/ixgbe/ixgbe_ethdev.c | 11 +
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
drivers/net/ixgbe/ixgbe_flow.c | 47 +++
drivers/net/ixgbe/ixgbe_ipsec.c | 737 +++++++++++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_ipsec.h | 151 +++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 59 ++-
drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 57 +++
10 files changed, 1082 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 6a144e7..f03c426 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -120,11 +120,11 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
else
SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
endif
-
ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
endif
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 4aab278..bb5dfd2 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -161,4 +161,12 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
#define IXGBE_WRITE_REG_ARRAY(hw, reg, index, value) \
IXGBE_PCI_REG_WRITE(IXGBE_PCI_REG_ARRAY_ADDR((hw), (reg), (index)), (value))
+#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
+do { \
+ uint32_t cnt = poll_ms; \
+ IXGBE_WRITE_REG(hw, (reg), (val)); \
+ while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
+ rte_delay_ms(1); \
+} while (0)
+
#endif /* _IXGBE_OS_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14b9c53..10bf486 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -61,6 +61,7 @@
#include <rte_random.h>
#include <rte_dev.h>
#include <rte_hash_crc.h>
+#include <rte_security_driver.h>
#include "ixgbe_logs.h"
#include "base/ixgbe_api.h"
@@ -1167,6 +1168,11 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
return 0;
}
+ /* Initialize security_ctx only for primary process*/
+ eth_dev->security_ctx = ixgbe_ipsec_ctx_create(eth_dev);
+ if (eth_dev->security_ctx == NULL)
+ return -ENOMEM;
+
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
@@ -1401,6 +1407,8 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(eth_dev);
+ rte_free(eth_dev->security_ctx);
+
return 0;
}
@@ -3695,6 +3703,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
hw->mac.type == ixgbe_mac_X550EM_a)
dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index e28c856..f5b52c4 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
+#include "ixgbe_ipsec.h"
#include <rte_time.h>
#include <rte_hash.h>
#include <rte_pci.h>
@@ -486,7 +487,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-
+ struct ixgbe_ipsec ipsec;
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
@@ -543,6 +544,9 @@ struct ixgbe_adapter {
#define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
(&((struct ixgbe_adapter *)adapter)->tm_conf)
+#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
+ (&((struct ixgbe_adapter *)adapter)->ipsec)
+
/*
* RX/TX function prototypes
*/
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 904c146..13c8243 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -187,6 +187,9 @@ const struct rte_flow_action *next_no_void_action(
* END
* other members in mask and spec should set to 0x00.
* item->last should be NULL.
+ *
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY.
+ *
*/
static int
cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
@@ -226,6 +229,41 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return -rte_errno;
}
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ act = next_no_void_action(actions, NULL);
+ if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
+ const void *conf = act->conf;
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ return -rte_errno;
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last ||
+ item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+
+ filter->proto = IPPROTO_ESP;
+ return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
+ item->type == RTE_FLOW_ITEM_TYPE_IPV6);
+ }
+
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -519,6 +557,10 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
+ /* ESP flow not really a flow*/
+ if (filter->proto == IPPROTO_ESP)
+ return 0;
+
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -2758,6 +2800,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
+
+ /* ESP flow not really a flow*/
+ if (ntuple_filter.proto == IPPROTO_ESP)
+ return flow;
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
new file mode 100644
index 0000000..99c0a73
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -0,0 +1,737 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_ip.h>
+#include <rte_jhash.h>
+#include <rte_security_driver.h>
+#include <rte_cryptodev.h>
+#include <rte_flow.h>
+
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_api.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_ipsec.h"
+
+#define RTE_IXGBE_REGISTER_POLL_WAIT_5_MS 5
+
+#define IXGBE_WAIT_RREAD \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
+ IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_RWRITE \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
+ IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_TREAD \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
+ IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_TWRITE \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
+ IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+
+#define CMP_IP(a, b) (\
+ (a).ipv6[0] == (b).ipv6[0] && \
+ (a).ipv6[1] == (b).ipv6[1] && \
+ (a).ipv6[2] == (b).ipv6[2] && \
+ (a).ipv6[3] == (b).ipv6[3])
+
+
+static void
+ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int i = 0;
+
+ /* clear Rx IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ uint16_t index = i << 3;
+ uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ IXGBE_WAIT_RWRITE;
+ }
+
+ /* clear Rx SPI and Rx/Tx SA tables*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ uint32_t index = i << 3;
+ uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+ }
+}
+
+static int
+ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
+{
+ struct rte_eth_dev *dev = ic_session->dev;
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
+ dev->data->dev_private);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip,
+ ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+ /* If no match, find a free entry in the IP table*/
+ if (ip_index < 0) {
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (priv->rx_ip_tbl[i].ref_count == 0) {
+ ip_index = i;
+ break;
+ }
+ }
+ }
+
+ /* Fail if no match and no free entries*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx SA table\n");
+ return -1;
+ }
+
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0] =
+ ic_session->dst_ip.ipv6[0];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1] =
+ ic_session->dst_ip.ipv6[1];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2] =
+ ic_session->dst_ip.ipv6[2];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3] =
+ ic_session->dst_ip.ipv6[3];
+ priv->rx_ip_tbl[ip_index].ref_count++;
+
+ priv->rx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->rx_sa_tbl[sa_index].ip_index = ip_index;
+ priv->rx_sa_tbl[sa_index].key[3] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
+ priv->rx_sa_tbl[sa_index].key[2] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
+ priv->rx_sa_tbl[sa_index].key[1] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
+ priv->rx_sa_tbl[sa_index].key[0] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
+ priv->rx_sa_tbl[sa_index].salt =
+ rte_cpu_to_be_32(ic_session->salt);
+ priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID;
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
+ priv->rx_sa_tbl[sa_index].mode |=
+ (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
+ if (ic_session->dst_ip.type == IPv6)
+ priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6;
+ priv->rx_sa_tbl[sa_index].used = 1;
+
+ /* write IP table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_IP | (ip_index << 3);
+ if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv4);
+ } else {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3]);
+ }
+ IXGBE_WAIT_RWRITE;
+
+ /* write SPI table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
+ priv->rx_sa_tbl[sa_index].spi);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
+ priv->rx_sa_tbl[sa_index].ip_index);
+ IXGBE_WAIT_RWRITE;
+
+ /* write Key table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
+ priv->rx_sa_tbl[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
+ priv->rx_sa_tbl[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
+ priv->rx_sa_tbl[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
+ priv->rx_sa_tbl[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
+ priv->rx_sa_tbl[sa_index].salt);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
+ priv->rx_sa_tbl[sa_index].mode);
+ IXGBE_WAIT_RWRITE;
+
+ } else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
+ int i;
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Tx SA table\n");
+ return -1;
+ }
+
+ priv->tx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->tx_sa_tbl[sa_index].key[3] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
+ priv->tx_sa_tbl[sa_index].key[2] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
+ priv->tx_sa_tbl[sa_index].key[1] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
+ priv->tx_sa_tbl[sa_index].key[0] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
+ priv->tx_sa_tbl[sa_index].salt =
+ rte_cpu_to_be_32(ic_session->salt);
+
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
+ priv->tx_sa_tbl[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
+ priv->tx_sa_tbl[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
+ priv->tx_sa_tbl[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
+ priv->tx_sa_tbl[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
+ priv->tx_sa_tbl[sa_index].salt);
+ IXGBE_WAIT_TWRITE;
+
+ priv->tx_sa_tbl[i].used = 1;
+ ic_session->sa_index = sa_index;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
+ struct ixgbe_crypto_session *ic_session)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_ipsec *priv =
+ IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+
+ /* Fail if no match*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx SA table\n");
+ return -1;
+ }
+
+ /* Disable and clear Rx SPI and key table entries */
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ priv->rx_sa_tbl[sa_index].used = 0;
+
+ /* If last used then clear the IP table entry*/
+ priv->rx_ip_tbl[ip_index].ref_count--;
+ if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP |
+ (ip_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ }
+ } else { /* session->dir == RTE_CRYPTO_OUTBOUND */
+ int i;
+
+ /* Find a match in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Tx SA table\n");
+ return -1;
+ }
+ reg_val = IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+
+ priv->tx_sa_tbl[sa_index].used = 0;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_create_session(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *session,
+ struct rte_mempool *mempool)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+ struct ixgbe_crypto_session *ic_session = NULL;
+ struct rte_crypto_aead_xform *aead_xform;
+ struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
+
+ if (rte_mempool_get(mempool, (void **)&ic_session)) {
+ PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
+ return -ENOMEM;
+ }
+
+ if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
+ conf->crypto_xform->aead.algo !=
+ RTE_CRYPTO_AEAD_AES_GCM) {
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ return -ENOTSUP;
+ }
+ aead_xform = &conf->crypto_xform->aead;
+
+ if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ return -ENOTSUP;
+ }
+ } else {
+ if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ return -ENOTSUP;
+ }
+ }
+
+ ic_session->key = aead_xform->key.data;
+ memcpy(&ic_session->salt,
+ &aead_xform->key.data[aead_xform->key.length], 4);
+ ic_session->spi = conf->ipsec.spi;
+ ic_session->dev = eth_dev;
+
+ set_sec_session_private_data(session, ic_session);
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ if (ixgbe_crypto_add_sa(ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ return -EPERM;
+ }
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_remove_session(void *device,
+ struct rte_security_session *session)
+{
+ struct rte_eth_dev *eth_dev = device;
+ struct ixgbe_crypto_session *ic_session =
+ (struct ixgbe_crypto_session *)
+ get_sec_session_private_data(session);
+ struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+
+ if (eth_dev != ic_session->dev) {
+ PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ return -ENODEV;
+ }
+
+ if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ return -EFAULT;
+ }
+
+ rte_mempool_put(mempool, (void *)ic_session);
+
+ return 0;
+}
+
+static inline uint8_t
+ixgbe_crypto_compute_pad_len(struct rte_mbuf *m)
+{
+ if (m->nb_segs == 1) {
+ /* 16 bytes ICV + 2 bytes ESP trailer + payload padding size
+ * payload padding size is stored at <pkt_len - 18>
+ */
+ uint8_t *esp_pad_len = rte_pktmbuf_mtod_offset(m, uint8_t *,
+ rte_pktmbuf_pkt_len(m) -
+ (ESP_TRAILER_SIZE + ESP_ICV_SIZE));
+ return *esp_pad_len + ESP_TRAILER_SIZE + ESP_ICV_SIZE;
+ }
+ return 0;
+}
+
+static int
+ixgbe_crypto_update_mb(void *device __rte_unused,
+ struct rte_security_session *session,
+ struct rte_mbuf *m, void *params __rte_unused)
+{
+ struct ixgbe_crypto_session *ic_session =
+ get_sec_session_private_data(session);
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ union ixgbe_crypto_tx_desc_md *mdata =
+ (union ixgbe_crypto_tx_desc_md *)&m->udata64;
+ mdata->enc = 1;
+ mdata->sa_idx = ic_session->sa_index;
+ mdata->pad_len = ixgbe_crypto_compute_pad_len(m);
+ }
+ return 0;
+}
+
+
+static const struct rte_security_capability *
+ixgbe_crypto_capabilities_get(void *device __rte_unused)
+{
+ static const struct rte_cryptodev_capabilities
+ aes_gcm_gmac_crypto_capabilities[] = {
+ { /* AES GMAC (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES GCM (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
+ }, }
+ },
+ };
+
+ static const struct rte_security_capability
+ ixgbe_security_capabilities[] = {
+ { /* IPsec Inline Crypto ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Transport Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+ };
+
+ return ixgbe_security_capabilities;
+}
+
+
+int
+ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint32_t reg;
+
+ /* sanity checks */
+ if (dev->data->dev_conf.rxmode.enable_lro) {
+ PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
+ return -1;
+ }
+ if (!dev->data->dev_conf.rxmode.hw_strip_crc) {
+ PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
+ return -1;
+ }
+
+
+ /* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
+
+ /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
+ * hang will occur with heavy traffic.
+ */
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+ reg = (reg & 0xFFFFFFF0) | 0x3;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+
+ reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
+ reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
+ if (reg != 0) {
+ PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+ return -1;
+ }
+ }
+ if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
+ IXGBE_SECTXCTRL_STORE_FORWARD);
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
+ if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
+ PMD_DRV_LOG(ERR, "Error enabling Tx Crypto");
+ return -1;
+ }
+ }
+
+ ixgbe_crypto_clear_ipsec_tables(dev);
+
+ return 0;
+}
+
+int
+ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
+ const void *ip_spec,
+ uint8_t is_ipv6)
+{
+ struct ixgbe_crypto_session *ic_session
+ = get_sec_session_private_data(sess);
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ if (is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
+ ic_session->src_ip.type = IPv6;
+ ic_session->dst_ip.type = IPv6;
+ rte_memcpy(ic_session->src_ip.ipv6,
+ ipv6->hdr.src_addr, 16);
+ rte_memcpy(ic_session->dst_ip.ipv6,
+ ipv6->hdr.dst_addr, 16);
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
+ ic_session->src_ip.type = IPv4;
+ ic_session->dst_ip.type = IPv4;
+ ic_session->src_ip.ipv4 = ipv4->hdr.src_addr;
+ ic_session->dst_ip.ipv4 = ipv4->hdr.dst_addr;
+ }
+ return ixgbe_crypto_add_sa(ic_session);
+ }
+
+ return 0;
+}
+
+static struct rte_security_ops ixgbe_security_ops = {
+ .session_create = ixgbe_crypto_create_session,
+ .session_update = NULL,
+ .session_stats_get = NULL,
+ .session_destroy = ixgbe_crypto_remove_session,
+ .set_pkt_metadata = ixgbe_crypto_update_mb,
+ .capabilities_get = ixgbe_crypto_capabilities_get
+};
+
+struct rte_security_ctx *
+ixgbe_ipsec_ctx_create(struct rte_eth_dev *dev)
+{
+ struct rte_security_ctx *ctx = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (ctx) {
+ ctx->device = (void *)dev;
+ ctx->ops = &ixgbe_security_ops;
+ ctx->sess_cnt = 0;
+ }
+ return ctx;
+}
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
new file mode 100644
index 0000000..fb8fefc
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.h
@@ -0,0 +1,151 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef IXGBE_IPSEC_H_
+#define IXGBE_IPSEC_H_
+
+#include <rte_security.h>
+
+#define IPSRXIDX_RX_EN 0x00000001
+#define IPSRXIDX_TABLE_IP 0x00000002
+#define IPSRXIDX_TABLE_SPI 0x00000004
+#define IPSRXIDX_TABLE_KEY 0x00000006
+#define IPSRXIDX_WRITE 0x80000000
+#define IPSRXIDX_READ 0x40000000
+#define IPSRXMOD_VALID 0x00000001
+#define IPSRXMOD_PROTO 0x00000004
+#define IPSRXMOD_DECRYPT 0x00000008
+#define IPSRXMOD_IPV6 0x00000010
+#define IXGBE_ADVTXD_POPTS_IPSEC 0x00000400
+#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000
+#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000
+#define IXGBE_RXDADV_IPSEC_STATUS_SECP 0x00020000
+#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK 0x18000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH 0x10000000
+#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED 0x18000000
+
+#define IPSEC_MAX_RX_IP_COUNT 128
+#define IPSEC_MAX_SA_COUNT 1024
+
+#define ESP_ICV_SIZE 16
+#define ESP_TRAILER_SIZE 2
+
+enum ixgbe_operation {
+ IXGBE_OP_AUTHENTICATED_ENCRYPTION,
+ IXGBE_OP_AUTHENTICATED_DECRYPTION
+};
+
+enum ixgbe_gcm_key {
+ IXGBE_GCM_KEY_128,
+ IXGBE_GCM_KEY_256
+};
+
+/**
+ * Generic IP address structure
+ * TODO: Find a better location for this, possibly rte_net.h.
+ **/
+struct ipaddr {
+ enum ipaddr_type {
+ IPv4,
+ IPv6
+ } type;
+ /**< IP Address Type - IPv4/IPv6 */
+
+ union {
+ uint32_t ipv4;
+ uint32_t ipv6[4];
+ };
+};
+
+/** inline crypto private session structure */
+struct ixgbe_crypto_session {
+ enum ixgbe_operation op;
+ uint8_t *key;
+ uint32_t salt;
+ uint32_t sa_index;
+ uint32_t spi;
+ struct ipaddr src_ip;
+ struct ipaddr dst_ip;
+ struct rte_eth_dev *dev;
+} __rte_cache_aligned;
+
+struct ixgbe_crypto_rx_ip_table {
+ struct ipaddr ip;
+ uint16_t ref_count;
+};
+struct ixgbe_crypto_rx_sa_table {
+ uint32_t spi;
+ uint32_t ip_index;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t mode;
+ uint8_t used;
+};
+
+struct ixgbe_crypto_tx_sa_table {
+ uint32_t spi;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t used;
+};
+
+union ixgbe_crypto_tx_desc_md {
+ uint64_t data;
+ struct {
+ /**< SA table index */
+ uint32_t sa_idx;
+ /**< ICV and ESP trailer length */
+ uint8_t pad_len;
+ /**< enable encryption */
+ uint8_t enc;
+ };
+};
+
+struct ixgbe_ipsec {
+ struct ixgbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
+ struct ixgbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
+ struct ixgbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
+};
+
+
+struct rte_security_ctx *
+ixgbe_ipsec_ctx_create(struct rte_eth_dev *dev);
+int ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
+int ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
+ const void *ip_spec,
+ uint8_t is_ipv6);
+
+
+
+#endif /*IXGBE_IPSEC_H_*/
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0038dfb..38a014a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -93,6 +93,7 @@
PKT_TX_TCP_SEG | \
PKT_TX_MACSEC | \
PKT_TX_OUTER_IP_CKSUM | \
+ PKT_TX_SEC_OFFLOAD | \
IXGBE_TX_IEEE1588_TMST)
#define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
@@ -395,7 +396,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
- uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
+ uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
+ union ixgbe_crypto_tx_desc_md *mdata)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx = 0;
@@ -479,6 +481,17 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
+ if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+ seqnum_seed |=
+ (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata->sa_idx);
+ type_tucmd_mlhl |= mdata->enc ?
+ (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
+ IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
+ type_tucmd_mlhl |=
+ (mdata->pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
+ tx_offload_mask.sa_idx |= ~0;
+ tx_offload_mask.sec_pad_len |= ~0;
+ }
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -657,6 +670,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
+ uint8_t use_ipsec;
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -684,6 +698,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
+ use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -695,6 +710,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+ if (use_ipsec) {
+ union ixgbe_crypto_tx_desc_md *ipsec_mdata =
+ (union ixgbe_crypto_tx_desc_md *)
+ &tx_pkt->udata64;
+ tx_offload.sa_idx = ipsec_mdata->sa_idx;
+ tx_offload.sec_pad_len = ipsec_mdata->pad_len;
+ }
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -855,7 +877,9 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- tx_offload);
+ tx_offload,
+ (union ixgbe_crypto_tx_desc_md *)
+ &tx_pkt->udata64);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -873,6 +897,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+ if (use_ipsec)
+ olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
m_seg = tx_pkt;
do {
@@ -1447,6 +1473,12 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
}
+ if (rx_status & IXGBE_RXD_STAT_SECP) {
+ pkt_flags |= PKT_RX_SEC_OFFLOAD;
+ if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
+ pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+ }
+
return pkt_flags;
}
@@ -2364,8 +2396,10 @@ void __attribute__((cold))
ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
- if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS)
- && (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
+ if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS) &&
+ (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) &&
+ !(dev->data->dev_conf.txmode.offloads
+ & DEV_TX_OFFLOAD_SECURITY)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = NULL;
#ifdef RTE_IXGBE_INC_VECTOR
@@ -2535,6 +2569,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->txq_flags = tx_conf->txq_flags;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
+ txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
+ DEV_TX_OFFLOAD_SECURITY);
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -4519,6 +4555,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->rx_using_sse = rx_using_sse;
+ rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_SECURITY);
}
}
@@ -5006,6 +5044,19 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
ixgbe_setup_loopback_link_82599(hw);
+ if ((dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads &
+ DEV_TX_OFFLOAD_SECURITY)) {
+ ret = ixgbe_crypto_enable_ipsec(dev);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "ixgbe_crypto_enable_ipsec fails with %d.",
+ ret);
+ return ret;
+ }
+ }
+
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 81c527f..4017831 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -138,8 +138,10 @@ struct ixgbe_rx_queue {
uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
- uint16_t rx_using_sse;
+ uint8_t rx_using_sse;
/**< indicates that vector RX is in use */
+ uint8_t using_ipsec;
+ /**< indicates that IPsec RX feature is in use */
#ifdef RTE_IXGBE_INC_VECTOR
uint16_t rxrearm_nb; /**< number of remaining to be re-armed */
uint16_t rxrearm_start; /**< the idx we start the re-arming from */
@@ -183,6 +185,10 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
+
+ /* inline ipsec related*/
+ uint64_t sa_idx:8; /**< TX SA database entry index */
+ uint64_t sec_pad_len:4; /**< padding length */
};
};
@@ -247,6 +253,9 @@ struct ixgbe_tx_queue {
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
uint8_t tx_deferred_start; /**< not in global dev start. */
+ uint8_t using_ipsec;
+ /**< indicates that IPsec TX feature is in use */
+
};
struct ixgbe_txq_ops {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e704a7f..b65220f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -123,6 +123,59 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
}
static inline void
+desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+ __m128i sterr0, sterr1, sterr2, sterr3;
+ __m128i tmp1, tmp2, tmp3, tmp4;
+ __m128i rearm0, rearm1, rearm2, rearm3;
+
+ const __m128i ipsec_sterr_msk = _mm_set_epi32(
+ 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
+ IXGBE_RXDADV_IPSEC_ERROR_AUTH_FAILED,
+ 0, 0);
+ const __m128i ipsec_proc_msk = _mm_set_epi32(
+ 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
+ const __m128i ipsec_err_flag = _mm_set_epi32(
+ 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
+ 0, 0);
+ const __m128i ipsec_proc_flag = _mm_set_epi32(
+ 0, PKT_RX_SEC_OFFLOAD, 0, 0);
+
+ rearm0 = _mm_load_si128((__m128i *)&rx_pkts[0]->rearm_data);
+ rearm1 = _mm_load_si128((__m128i *)&rx_pkts[1]->rearm_data);
+ rearm2 = _mm_load_si128((__m128i *)&rx_pkts[2]->rearm_data);
+ rearm3 = _mm_load_si128((__m128i *)&rx_pkts[3]->rearm_data);
+ sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
+ sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
+ sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
+ sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
+ tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
+ tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
+ tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
+ sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+ _mm_and_si128(tmp2, ipsec_proc_flag));
+ sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
+ _mm_and_si128(tmp4, ipsec_proc_flag));
+ tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
+ tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
+ tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
+ sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+ _mm_and_si128(tmp2, ipsec_proc_flag));
+ sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
+ _mm_and_si128(tmp4, ipsec_proc_flag));
+ rearm0 = _mm_or_si128(rearm0, sterr0);
+ rearm1 = _mm_or_si128(rearm1, sterr1);
+ rearm2 = _mm_or_si128(rearm2, sterr2);
+ rearm3 = _mm_or_si128(rearm3, sterr3);
+ _mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+ _mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+ _mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+ _mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
struct rte_mbuf **rx_pkts)
{
@@ -310,6 +363,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ixgbe_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
+ uint8_t use_ipsec = rxq->using_ipsec;
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -473,6 +527,9 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, mbuf_init, vlan_flags, &rx_pkts[pos]);
+ if (unlikely(use_ipsec))
+ desc_to_olflags_v_ipsec(descs, rx_pkts);
+
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 10/11] crypto/dpaa2_sec: add support for protocol offload ipsec
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (8 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 09/11] net/ixgbe: enable inline ipsec Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 11/11] examples/ipsec-secgw: add support for security offload Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Driver implementation to support rte_security APIs
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 422 ++++++++++++++++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 62 ++++
3 files changed, 474 insertions(+), 11 deletions(-)
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
index c3bb3dd..8fd07d6 100644
--- a/doc/guides/cryptodevs/features/dpaa2_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -7,6 +7,7 @@
Symmetric crypto = Y
Sym operation chaining = Y
HW Accelerated = Y
+Protocol offload = Y
;
; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index c67548e..2cdc8c1 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -36,6 +36,7 @@
#include <rte_mbuf.h>
#include <rte_cryptodev.h>
+#include <rte_security_driver.h>
#include <rte_malloc.h>
#include <rte_memcpy.h>
#include <rte_string_fns.h>
@@ -73,12 +74,44 @@
#define FLE_POOL_NUM_BUFS 32000
#define FLE_POOL_BUF_SIZE 256
#define FLE_POOL_CACHE_SIZE 512
+#define SEC_FLC_DHR_OUTBOUND -114
+#define SEC_FLC_DHR_INBOUND 0
enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
static uint8_t cryptodev_driver_id;
static inline int
+build_proto_fd(dpaa2_sec_session *sess,
+ struct rte_crypto_op *op,
+ struct qbman_fd *fd, uint16_t bpid)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct ctxt_priv *priv = sess->ctxt;
+ struct sec_flow_context *flc;
+ struct rte_mbuf *mbuf = sym_op->m_src;
+
+ if (likely(bpid < MAX_BPID))
+ DPAA2_SET_FD_BPID(fd, bpid);
+ else
+ DPAA2_SET_FD_IVP(fd);
+
+ /* Save the shared descriptor */
+ flc = &priv->flc_desc[0].flc;
+
+ DPAA2_SET_FD_ADDR(fd, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+ DPAA2_SET_FD_OFFSET(fd, sym_op->m_src->data_off);
+ DPAA2_SET_FD_LEN(fd, sym_op->m_src->pkt_len);
+ DPAA2_SET_FD_FLC(fd, ((uint64_t)flc));
+
+ /* save physical address of mbuf */
+ op->sym->aead.digest.phys_addr = mbuf->buf_physaddr;
+ mbuf->buf_physaddr = (uint64_t)op;
+
+ return 0;
+}
+
+static inline int
build_authenc_gcm_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
@@ -560,10 +593,11 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
}
static inline int
-build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+build_sec_fd(struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
{
int ret = -1;
+ dpaa2_sec_session *sess;
PMD_INIT_FUNC_TRACE();
/*
@@ -573,6 +607,16 @@ build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
return -ENOTSUP;
}
+
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+ sess = (dpaa2_sec_session *)get_session_private_data(
+ op->sym->session, cryptodev_driver_id);
+ else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+ sess = (dpaa2_sec_session *)get_sec_session_private_data(
+ op->sym->sec_session);
+ else
+ return -1;
+
switch (sess->ctxt_type) {
case DPAA2_SEC_CIPHER:
ret = build_cipher_fd(sess, op, fd, bpid);
@@ -586,6 +630,9 @@ build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
case DPAA2_SEC_CIPHER_HASH:
ret = build_authenc_fd(sess, op, fd, bpid);
break;
+ case DPAA2_SEC_IPSEC:
+ ret = build_proto_fd(sess, op, fd, bpid);
+ break;
case DPAA2_SEC_HASH_CIPHER:
default:
RTE_LOG(ERR, PMD, "error: Unsupported session\n");
@@ -609,12 +656,11 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
/*todo - need to support multiple buffer pools */
uint16_t bpid;
struct rte_mempool *mb_pool;
- dpaa2_sec_session *sess;
if (unlikely(nb_ops == 0))
return 0;
- if (ops[0]->sess_type != RTE_CRYPTO_OP_WITH_SESSION) {
+ if (ops[0]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
return 0;
}
@@ -639,13 +685,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
for (loop = 0; loop < frames_to_send; loop++) {
/*Clear the unused FD fields before sending*/
memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
- sess = (dpaa2_sec_session *)
- get_session_private_data(
- (*ops)->sym->session,
- cryptodev_driver_id);
mb_pool = (*ops)->sym->m_src->pool;
bpid = mempool_to_bpid(mb_pool);
- ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+ ret = build_sec_fd(*ops, &fd_arr[loop], bpid);
if (ret) {
PMD_DRV_LOG(ERR, "error: Improper packet"
" contents for crypto operation\n");
@@ -670,13 +712,45 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
}
static inline struct rte_crypto_op *
-sec_fd_to_mbuf(const struct qbman_fd *fd)
+sec_simple_fd_to_mbuf(const struct qbman_fd *fd, __rte_unused uint8_t id)
+{
+ struct rte_crypto_op *op;
+ uint16_t len = DPAA2_GET_FD_LEN(fd);
+ uint16_t diff = 0;
+ dpaa2_sec_session *sess_priv;
+
+ struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
+ DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+ rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
+
+ op = (struct rte_crypto_op *)mbuf->buf_physaddr;
+ mbuf->buf_physaddr = op->sym->aead.digest.phys_addr;
+ op->sym->aead.digest.phys_addr = 0L;
+
+ sess_priv = (dpaa2_sec_session *)get_sec_session_private_data(
+ op->sym->sec_session);
+ if (sess_priv->dir == DIR_ENC)
+ mbuf->data_off += SEC_FLC_DHR_OUTBOUND;
+ else
+ mbuf->data_off += SEC_FLC_DHR_INBOUND;
+ diff = len - mbuf->pkt_len;
+ mbuf->pkt_len += diff;
+ mbuf->data_len += diff;
+
+ return op;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd, uint8_t driver_id)
{
struct qbman_fle *fle;
struct rte_crypto_op *op;
struct ctxt_priv *priv;
struct rte_mbuf *dst, *src;
+ if (DPAA2_FD_GET_FORMAT(fd) == qbman_fd_single)
+ return sec_simple_fd_to_mbuf(fd, driver_id);
+
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
@@ -730,6 +804,8 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
{
/* Function is responsible to receive frames for a given device and VQ*/
struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+ struct rte_cryptodev *dev =
+ (struct rte_cryptodev *)(dpaa2_qp->rx_vq.dev);
struct qbman_result *dq_storage;
uint32_t fqid = dpaa2_qp->rx_vq.fqid;
int ret, num_rx = 0;
@@ -799,7 +875,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
}
fd = qbman_result_DQ_fd(dq_storage);
- ops[num_rx] = sec_fd_to_mbuf(fd);
+ ops[num_rx] = sec_fd_to_mbuf(fd, dev->driver_id);
if (unlikely(fd->simple.frc)) {
/* TODO Parse SEC errors */
@@ -1576,6 +1652,300 @@ dpaa2_sec_set_session_parameters(struct rte_cryptodev *dev,
}
static int
+dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
+ struct rte_security_session_conf *conf,
+ void *sess)
+{
+ struct rte_security_ipsec_xform *ipsec_xform = &conf->ipsec;
+ struct rte_crypto_auth_xform *auth_xform;
+ struct rte_crypto_cipher_xform *cipher_xform;
+ dpaa2_sec_session *session = (dpaa2_sec_session *)sess;
+ struct ctxt_priv *priv;
+ struct ipsec_encap_pdb encap_pdb;
+ struct ipsec_decap_pdb decap_pdb;
+ struct alginfo authdata, cipherdata;
+ unsigned int bufsize;
+ struct sec_flow_context *flc;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ cipher_xform = &conf->crypto_xform->cipher;
+ auth_xform = &conf->crypto_xform->next->auth;
+ } else {
+ auth_xform = &conf->crypto_xform->auth;
+ cipher_xform = &conf->crypto_xform->next->cipher;
+ }
+ priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+ sizeof(struct ctxt_priv) +
+ sizeof(struct sec_flc_desc),
+ RTE_CACHE_LINE_SIZE);
+
+ if (priv == NULL) {
+ RTE_LOG(ERR, PMD, "\nNo memory for priv CTXT");
+ return -ENOMEM;
+ }
+
+ flc = &priv->flc_desc[0].flc;
+
+ session->ctxt_type = DPAA2_SEC_IPSEC;
+ session->cipher_key.data = rte_zmalloc(NULL,
+ cipher_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->cipher_key.data == NULL &&
+ cipher_xform->key.length > 0) {
+ RTE_LOG(ERR, PMD, "No Memory for cipher key\n");
+ rte_free(priv);
+ return -ENOMEM;
+ }
+
+ session->cipher_key.length = cipher_xform->key.length;
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->auth_key.data == NULL &&
+ auth_xform->key.length > 0) {
+ RTE_LOG(ERR, PMD, "No Memory for auth key\n");
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -ENOMEM;
+ }
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->cipher_key.data, cipher_xform->key.data,
+ cipher_xform->key.length);
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+
+ authdata.key = (uint64_t)session->auth_key.data;
+ authdata.keylen = session->auth_key.length;
+ authdata.key_enc_flags = 0;
+ authdata.key_type = RTA_DATA_IMM;
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA1_96;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_MD5_96;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_256_128;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_384_192;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_512_256;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ authdata.algtype = OP_PCL_IPSEC_AES_CMAC_96;
+ session->auth_alg = RTE_CRYPTO_AUTH_AES_CMAC;
+ break;
+ case RTE_CRYPTO_AUTH_NULL:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_NULL;
+ session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ case RTE_CRYPTO_AUTH_SHA1:
+ case RTE_CRYPTO_AUTH_SHA256:
+ case RTE_CRYPTO_AUTH_SHA512:
+ case RTE_CRYPTO_AUTH_SHA224:
+ case RTE_CRYPTO_AUTH_SHA384:
+ case RTE_CRYPTO_AUTH_MD5:
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+ auth_xform->algo);
+ goto out;
+ default:
+ RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+ auth_xform->algo);
+ goto out;
+ }
+ cipherdata.key = (uint64_t)session->cipher_key.data;
+ cipherdata.keylen = session->cipher_key.length;
+ cipherdata.key_enc_flags = 0;
+ cipherdata.key_type = RTA_DATA_IMM;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ cipherdata.algtype = OP_PCL_IPSEC_AES_CBC;
+ cipherdata.algmode = OP_ALG_AAI_CBC;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ cipherdata.algtype = OP_PCL_IPSEC_3DES;
+ cipherdata.algmode = OP_ALG_AAI_CBC;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ cipherdata.algtype = OP_PCL_IPSEC_AES_CTR;
+ cipherdata.algmode = OP_ALG_AAI_CTR;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_NULL:
+ cipherdata.algtype = OP_PCL_IPSEC_NULL;
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
+ RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+ cipher_xform->algo);
+ goto out;
+ default:
+ RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+ cipher_xform->algo);
+ goto out;
+ }
+
+ if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ struct ip ip4_hdr;
+
+ flc->dhr = SEC_FLC_DHR_OUTBOUND;
+ ip4_hdr.ip_v = IPVERSION;
+ ip4_hdr.ip_hl = 5;
+ ip4_hdr.ip_len = rte_cpu_to_be_16(sizeof(ip4_hdr));
+ ip4_hdr.ip_tos = ipsec_xform->tunnel.ipv4.dscp;
+ ip4_hdr.ip_id = 0;
+ ip4_hdr.ip_off = 0;
+ ip4_hdr.ip_ttl = ipsec_xform->tunnel.ipv4.ttl;
+ ip4_hdr.ip_p = 0x32;
+ ip4_hdr.ip_sum = 0;
+ ip4_hdr.ip_src = ipsec_xform->tunnel.ipv4.src_ip;
+ ip4_hdr.ip_dst = ipsec_xform->tunnel.ipv4.dst_ip;
+ ip4_hdr.ip_sum = calc_chksum((uint16_t *)(void *)&ip4_hdr,
+ sizeof(struct ip));
+
+ /* For Sec Proto only one descriptor is required. */
+ memset(&encap_pdb, 0, sizeof(struct ipsec_encap_pdb));
+ encap_pdb.options = (IPVERSION << PDBNH_ESP_ENCAP_SHIFT) |
+ PDBOPTS_ESP_OIHI_PDB_INL |
+ PDBOPTS_ESP_IVSRC |
+ PDBHMO_ESP_ENCAP_DTTL;
+ encap_pdb.spi = ipsec_xform->spi;
+ encap_pdb.ip_hdr_len = sizeof(struct ip);
+
+ session->dir = DIR_ENC;
+ bufsize = cnstr_shdsc_ipsec_new_encap(priv->flc_desc[0].desc,
+ 1, 0, &encap_pdb,
+ (uint8_t *)&ip4_hdr,
+ &cipherdata, &authdata);
+ } else if (ipsec_xform->direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ flc->dhr = SEC_FLC_DHR_INBOUND;
+ memset(&decap_pdb, 0, sizeof(struct ipsec_decap_pdb));
+ decap_pdb.options = sizeof(struct ip) << 16;
+ session->dir = DIR_DEC;
+ bufsize = cnstr_shdsc_ipsec_new_decap(priv->flc_desc[0].desc,
+ 1, 0, &decap_pdb, &cipherdata, &authdata);
+ } else
+ goto out;
+ flc->word1_sdl = (uint8_t)bufsize;
+
+ /* Enable the stashing control bit */
+ DPAA2_SET_FLC_RSC(flc);
+ flc->word2_rflc_31_0 = lower_32_bits(
+ (uint64_t)&(((struct dpaa2_sec_qp *)
+ dev->data->queue_pairs[0])->rx_vq) | 0x14);
+ flc->word3_rflc_63_32 = upper_32_bits(
+ (uint64_t)&(((struct dpaa2_sec_qp *)
+ dev->data->queue_pairs[0])->rx_vq));
+
+ /* Set EWS bit i.e. enable write-safe */
+ DPAA2_SET_FLC_EWS(flc);
+ /* Set BS = 1 i.e reuse input buffers as output buffers */
+ DPAA2_SET_FLC_REUSE_BS(flc);
+ /* Set FF = 10; reuse input buffers if they provide sufficient space */
+ DPAA2_SET_FLC_REUSE_FF(flc);
+
+ session->ctxt = priv;
+
+ return 0;
+out:
+ rte_free(session->auth_key.data);
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -1;
+}
+
+static int
+dpaa2_sec_security_session_create(void *dev,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
+ int ret;
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ switch (conf->protocol) {
+ case RTE_SECURITY_PROTOCOL_IPSEC:
+ ret = dpaa2_sec_set_ipsec_session(cdev, conf,
+ sess_private_data);
+ break;
+ case RTE_SECURITY_PROTOCOL_MACSEC:
+ return -ENOTSUP;
+ default:
+ return -EINVAL;
+ }
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "DPAA2 PMD: failed to configure session parameters");
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sec_session_private_data(sess, sess_private_data);
+
+ return ret;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static int
+dpaa2_sec_security_session_destroy(void *dev __rte_unused,
+ struct rte_security_session *sess)
+{
+ PMD_INIT_FUNC_TRACE();
+ void *sess_priv = get_sec_session_private_data(sess);
+
+ dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+
+ rte_free(s->ctxt);
+ rte_free(s->cipher_key.data);
+ rte_free(s->auth_key.data);
+ memset(sess, 0, sizeof(dpaa2_sec_session));
+ set_sec_session_private_data(sess, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+ return 0;
+}
+
+static int
dpaa2_sec_session_configure(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *sess,
@@ -1849,11 +2219,28 @@ static struct rte_cryptodev_ops crypto_ops = {
.session_clear = dpaa2_sec_session_clear,
};
+static const struct rte_security_capability *
+dpaa2_sec_capabilities_get(void *device __rte_unused)
+{
+ return dpaa2_sec_security_cap;
+}
+
+struct rte_security_ops dpaa2_sec_security_ops = {
+ .session_create = dpaa2_sec_security_session_create,
+ .session_update = NULL,
+ .session_stats_get = NULL,
+ .session_destroy = dpaa2_sec_security_session_destroy,
+ .set_pkt_metadata = NULL,
+ .capabilities_get = dpaa2_sec_capabilities_get
+};
+
static int
dpaa2_sec_uninit(const struct rte_cryptodev *dev)
{
struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+ rte_free(dev->security_ctx);
+
rte_mempool_free(internals->fle_pool);
PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
@@ -1868,6 +2255,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
struct dpaa2_sec_dev_private *internals;
struct rte_device *dev = cryptodev->device;
struct rte_dpaa2_device *dpaa2_dev;
+ struct rte_security_ctx *security_instance;
struct fsl_mc_io *dpseci;
uint16_t token;
struct dpseci_attr attr;
@@ -1889,7 +2277,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_SECURITY;
internals = cryptodev->data->dev_private;
internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
@@ -1903,6 +2292,17 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
PMD_INIT_LOG(DEBUG, "Device already init by primary process");
return 0;
}
+
+ /* Initialize security_ctx only for primary process*/
+ security_instance = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (security_instance == NULL)
+ return -ENOMEM;
+ security_instance->device = (void *)cryptodev;
+ security_instance->ops = &dpaa2_sec_security_ops;
+ security_instance->sess_cnt = 0;
+ cryptodev->security_ctx = security_instance;
+
/*Open the rte device via MC and save the handle for further use*/
dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
sizeof(struct fsl_mc_io), 0);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 3849a05..14e71df 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -67,6 +67,11 @@ enum shr_desc_type {
#define DIR_ENC 1
#define DIR_DEC 0
+#define DPAA2_SET_FLC_EWS(flc) (flc->word1_bits23_16 |= 0x1)
+#define DPAA2_SET_FLC_RSC(flc) (flc->word1_bits31_24 |= 0x1)
+#define DPAA2_SET_FLC_REUSE_BS(flc) (flc->mode_bits |= 0x8000)
+#define DPAA2_SET_FLC_REUSE_FF(flc) (flc->mode_bits |= 0x2000)
+
/* SEC Flow Context Descriptor */
struct sec_flow_context {
/* word 0 */
@@ -411,4 +416,61 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
+
+static const struct rte_security_capability dpaa2_sec_security_cap[] = {
+ { /* IPsec Lookaside Protocol offload ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = dpaa2_sec_capabilities
+ },
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = dpaa2_sec_capabilities
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+};
+
+/**
+ * Calculate IP checksum over a buffer
+ *
+ * @param buffer buffer over which the checksum is calculated
+ * @param len buffer length in bytes
+ *
+ * @return checksum value in host cpu order
+ */
+static inline uint16_t
+calc_chksum(void *buffer, int len)
+{
+ uint16_t *buf = (uint16_t *)buffer;
+ uint32_t sum = 0;
+ uint16_t result;
+
+ for (sum = 0; len > 1; len -= 2)
+ sum += *buf++;
+
+ if (len == 1)
+ sum += *(unsigned char *)buf;
+
+ sum = (sum >> 16) + (sum & 0xFFFF);
+ sum += (sum >> 16);
+ result = ~sum;
+
+ return result;
+}
+
#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v5 11/11] examples/ipsec-secgw: add support for security offload
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (9 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 10/11] crypto/dpaa2_sec: add support for protocol offload ipsec Akhil Goyal
@ 2017-10-24 14:15 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
11 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-24 14:15 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
The ipsec-secgw application is modified so that it can support the
following types of actions for crypto operations:
1. full protocol offload using crypto devices
2. inline ipsec using ethernet devices to perform crypto operations
3. full protocol offload using ethernet devices
4. no protocol offload
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
doc/guides/sample_app_ug/ipsec_secgw.rst | 52 +++++-
examples/ipsec-secgw/esp.c | 120 ++++++++----
examples/ipsec-secgw/esp.h | 10 -
examples/ipsec-secgw/ipsec-secgw.c | 5 +
examples/ipsec-secgw/ipsec.c | 308 ++++++++++++++++++++++++++-----
examples/ipsec-secgw/ipsec.h | 32 +++-
examples/ipsec-secgw/sa.c | 151 +++++++++++----
7 files changed, 545 insertions(+), 133 deletions(-)
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index a292859..358e763 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -52,13 +52,22 @@ The application classifies the ports as *Protected* and *Unprotected*.
Thus, traffic received on an Unprotected or Protected port is consider
Inbound or Outbound respectively.
+The application also supports complete IPsec protocol offload to hardware
+(look aside crypto accelerator or ethernet device). It also supports
+inline ipsec processing by the supported ethernet device during transmission.
+These modes can be selected during the SA creation configuration.
+
+In case of complete protocol offload, the processing of headers (ESP and outer
+IP header) is done by the hardware and the application does not need to
+add/remove them during outbound/inbound processing.
+
The Path for IPsec Inbound traffic is:
* Read packets from the port.
* Classify packets between IPv4 and ESP.
* Perform Inbound SA lookup for ESP packets based on their SPI.
-* Perform Verification/Decryption.
-* Remove ESP and outer IP header
+* Perform Verification/Decryption (Not needed in case of inline ipsec).
+* Remove ESP and outer IP header (Not needed in case of protocol offload).
* Inbound SP check using ACL of decrypted packets and any other IPv4 packets.
* Routing.
* Write packet to port.
@@ -68,8 +77,8 @@ The Path for the IPsec Outbound traffic is:
* Read packets from the port.
* Perform Outbound SP check using ACL of all IPv4 traffic.
* Perform Outbound SA lookup for packets that need IPsec protection.
-* Add ESP and outer IP header.
-* Perform Encryption/Digest.
+* Add ESP and outer IP header (Not needed in case of protocol offload).
+* Perform Encryption/Digest (Not needed in case of inline ipsec).
* Routing.
* Write packet to port.
@@ -389,7 +398,7 @@ The SA rule syntax is shown as follows:
.. code-block:: console
sa <dir> <spi> <cipher_algo> <cipher_key> <auth_algo> <auth_key>
- <mode> <src_ip> <dst_ip>
+ <mode> <src_ip> <dst_ip> <action_type> <port_id>
where each options means:
@@ -530,6 +539,34 @@ where each options means:
* *dst X.X.X.X* for IPv4
* *dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX* for IPv6
+``<type>``
+
+ * Action type to specify the security action. This option specifies
+ whether the SA is handled with look aside protocol offload to a HW
+ accelerator, protocol offload on the ethernet device, or inline
+ crypto processing on the ethernet device during transmission.
+
+ * Optional: Yes, default type *no-offload*
+
+ * Available options:
+
+ * *lookaside-protocol-offload*: look aside protocol offload to HW accelerator
+ * *inline-protocol-offload*: inline protocol offload on ethernet device
+ * *inline-crypto-offload*: inline crypto processing on ethernet device
+ * *no-offload*: no offloading to hardware
+
+ ``<port_id>``
+
+ * Port/device ID of the ethernet/crypto accelerator for which the SA is
+ configured. This option is used when *type* is NOT *no-offload*
+
+ * Optional: No, if *type* is not *no-offload*
+
+ * Syntax:
+
+ * *port_id X* X is a valid device number in decimal
+
+
Example SA rules:
.. code-block:: console
@@ -549,6 +586,11 @@ Example SA rules:
aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode ipv4-tunnel src 172.16.2.5 dst 172.16.1.5
+ sa out 5 cipher_algo aes-128-cbc cipher_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
+ auth_algo sha1-hmac auth_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
+ mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
+ type lookaside-protocol-offload port_id 4
+
Routing rule syntax
^^^^^^^^^^^^^^^^^^^
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index a63fb95..f7afe13 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -58,8 +58,11 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_sym_op *sym_cop;
int32_t payload_len, ip_hdr_len;
- RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+ return 0;
+
+ RTE_ASSERT(m != NULL);
RTE_ASSERT(cop != NULL);
ip4 = rte_pktmbuf_mtod(m, struct ip *);
@@ -175,29 +178,44 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
RTE_ASSERT(sa != NULL);
RTE_ASSERT(cop != NULL);
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
+ if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ else
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ } else
+ cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ }
+
if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
RTE_LOG(ERR, IPSEC_ESP, "failed crypto op\n");
return -1;
}
- nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
- rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
- pad_len = nexthdr - 1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
+ sa->ol_flags & RTE_SECURITY_RX_HW_TRAILER_OFFLOAD) {
+ nexthdr = &m->inner_esp_next_proto;
+ } else {
+ nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
+ rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
+ pad_len = nexthdr - 1;
+
+ padding = pad_len - *pad_len;
+ for (i = 0; i < *pad_len; i++) {
+ if (padding[i] != i + 1) {
+ RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
+ return -EINVAL;
+ }
+ }
- padding = pad_len - *pad_len;
- for (i = 0; i < *pad_len; i++) {
- if (padding[i] != i + 1) {
- RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
+ if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
+ RTE_LOG(ERR, IPSEC_ESP,
+ "failed to remove pad_len + digest\n");
return -EINVAL;
}
}
- if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
- RTE_LOG(ERR, IPSEC_ESP,
- "failed to remove pad_len + digest\n");
- return -EINVAL;
- }
-
if (unlikely(sa->flags == TRANSPORT)) {
ip = rte_pktmbuf_mtod(m, struct ip *);
ip4 = (struct ip *)rte_pktmbuf_adj(m,
@@ -227,14 +245,13 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct ip *ip4;
struct ip6_hdr *ip6;
struct esp_hdr *esp = NULL;
- uint8_t *padding, *new_ip, nlp;
+ uint8_t *padding = NULL, *new_ip, nlp;
struct rte_crypto_sym_op *sym_cop;
int32_t i;
uint16_t pad_payload_len, pad_len, ip_hdr_len;
RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
- RTE_ASSERT(cop != NULL);
ip_hdr_len = 0;
@@ -284,12 +301,19 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
return -EINVAL;
}
- padding = (uint8_t *)rte_pktmbuf_append(m, pad_len + sa->digest_len);
- if (unlikely(padding == NULL)) {
- RTE_LOG(ERR, IPSEC_ESP, "not enough mbuf trailing space\n");
- return -ENOSPC;
+ /* Add trailer padding if it is not constructed by HW */
+ if (sa->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+ (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
+ !(sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD))) {
+ padding = (uint8_t *)rte_pktmbuf_append(m, pad_len +
+ sa->digest_len);
+ if (unlikely(padding == NULL)) {
+ RTE_LOG(ERR, IPSEC_ESP,
+ "not enough mbuf trailing space\n");
+ return -ENOSPC;
+ }
+ rte_prefetch0(padding);
}
- rte_prefetch0(padding);
switch (sa->flags) {
case IP4_TUNNEL:
@@ -323,15 +347,46 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
esp->spi = rte_cpu_to_be_32(sa->spi);
esp->seq = rte_cpu_to_be_32((uint32_t)sa->seq);
+ /* set iv */
uint64_t *iv = (uint64_t *)(esp + 1);
+ if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ *iv = rte_cpu_to_be_64(sa->seq);
+ } else {
+ switch (sa->cipher_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ memset(iv, 0, sa->iv_len);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ *iv = rte_cpu_to_be_64(sa->seq);
+ break;
+ default:
+ RTE_LOG(ERR, IPSEC_ESP,
+ "unsupported cipher algorithm %u\n",
+ sa->cipher_algo);
+ return -EINVAL;
+ }
+ }
+
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD) {
+ /* Set the inner esp next protocol for HW trailer */
+ m->inner_esp_next_proto = nlp;
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ } else {
+ padding[pad_len - 2] = pad_len - 2;
+ padding[pad_len - 1] = nlp;
+ }
+ goto done;
+ }
+ RTE_ASSERT(cop != NULL);
sym_cop = get_sym_cop(cop);
sym_cop->m_src = m;
if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
uint8_t *aad;
- *iv = rte_cpu_to_be_64(sa->seq);
sym_cop->aead.data.offset = ip_hdr_len +
sizeof(struct esp_hdr) + sa->iv_len;
sym_cop->aead.data.length = pad_payload_len;
@@ -361,13 +416,11 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
switch (sa->cipher_algo) {
case RTE_CRYPTO_CIPHER_NULL:
case RTE_CRYPTO_CIPHER_AES_CBC:
- memset(iv, 0, sa->iv_len);
sym_cop->cipher.data.offset = ip_hdr_len +
sizeof(struct esp_hdr);
sym_cop->cipher.data.length = pad_payload_len + sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
- *iv = rte_cpu_to_be_64(sa->seq);
sym_cop->cipher.data.offset = ip_hdr_len +
sizeof(struct esp_hdr) + sa->iv_len;
sym_cop->cipher.data.length = pad_payload_len;
@@ -409,21 +462,26 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
}
+done:
return 0;
}
int
-esp_outbound_post(struct rte_mbuf *m __rte_unused,
- struct ipsec_sa *sa __rte_unused,
- struct rte_crypto_op *cop)
+esp_outbound_post(struct rte_mbuf *m,
+ struct ipsec_sa *sa,
+ struct rte_crypto_op *cop)
{
RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
- RTE_ASSERT(cop != NULL);
- if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
- RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
- return -1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ } else {
+ RTE_ASSERT(cop != NULL);
+ if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
+ RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
+ return -1;
+ }
}
return 0;
diff --git a/examples/ipsec-secgw/esp.h b/examples/ipsec-secgw/esp.h
index fa5cc8a..23601e3 100644
--- a/examples/ipsec-secgw/esp.h
+++ b/examples/ipsec-secgw/esp.h
@@ -35,16 +35,6 @@
struct mbuf;
-/* RFC4303 */
-struct esp_hdr {
- uint32_t spi;
- uint32_t seq;
- /* Payload */
- /* Padding */
- /* Pad Length */
- /* Next Header */
- /* Integrity Check Value - ICV */
-};
int
esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 6abf852..6201d85 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1390,6 +1390,11 @@ port_init(uint16_t portid)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SECURITY)
+ port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SECURITY;
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SECURITY)
+ port_conf.txmode.offloads |= DEV_TX_OFFLOAD_SECURITY;
+
ret = rte_eth_dev_configure(portid, nb_rx_queue, nb_tx_queue,
&port_conf);
if (ret < 0)
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 36fb8c8..c24284d 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -37,7 +37,9 @@
#include <rte_branch_prediction.h>
#include <rte_log.h>
#include <rte_crypto.h>
+#include <rte_security.h>
#include <rte_cryptodev.h>
+#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_hash.h>
@@ -49,7 +51,7 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
{
struct rte_cryptodev_info cdev_info;
unsigned long cdev_id_qp = 0;
- int32_t ret;
+ int32_t ret = 0;
struct cdev_key key = { 0 };
key.lcore_id = (uint8_t)rte_lcore_id();
@@ -58,16 +60,19 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
key.auth_algo = (uint8_t)sa->auth_algo;
key.aead_algo = (uint8_t)sa->aead_algo;
- ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
- (void **)&cdev_id_qp);
- if (ret < 0) {
- RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
- "auth_algo %u, aead_algo %u\n",
- key.lcore_id,
- key.cipher_algo,
- key.auth_algo,
- key.aead_algo);
- return -1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+ ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
+ (void **)&cdev_id_qp);
+ if (ret < 0) {
+ RTE_LOG(ERR, IPSEC,
+ "No cryptodev: core %u, cipher_algo %u, "
+ "auth_algo %u, aead_algo %u\n",
+ key.lcore_id,
+ key.cipher_algo,
+ key.auth_algo,
+ key.aead_algo);
+ return -1;
+ }
}
RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
@@ -75,23 +80,153 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
ipsec_ctx->tbl[cdev_id_qp].id,
ipsec_ctx->tbl[cdev_id_qp].qp);
- sa->crypto_session = rte_cryptodev_sym_session_create(
- ipsec_ctx->session_pool);
- rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
- sa->crypto_session, sa->xforms,
- ipsec_ctx->session_pool);
-
- rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
- if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
- ret = rte_cryptodev_queue_pair_attach_sym_session(
- ipsec_ctx->tbl[cdev_id_qp].id,
- ipsec_ctx->tbl[cdev_id_qp].qp,
- sa->crypto_session);
- if (ret < 0) {
- RTE_LOG(ERR, IPSEC,
- "Session cannot be attached to qp %u ",
- ipsec_ctx->tbl[cdev_id_qp].qp);
- return -1;
+ if (sa->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+ struct rte_security_session_conf sess_conf = {
+ .action_type = sa->type,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .spi = sa->spi,
+ .salt = sa->salt,
+ .options = { 0 },
+ .direction = sa->direction,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = (sa->flags == IP4_TUNNEL ||
+ sa->flags == IP6_TUNNEL) ?
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL :
+ RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ },
+ .crypto_xform = sa->xforms
+
+ };
+
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) {
+ struct rte_security_ctx *ctx = (struct rte_security_ctx *)
+ rte_cryptodev_get_sec_ctx(
+ ipsec_ctx->tbl[cdev_id_qp].id);
+
+ if (sess_conf.ipsec.mode ==
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ struct rte_security_ipsec_tunnel_param *tunnel =
+ &sess_conf.ipsec.tunnel;
+ if (sa->flags == IP4_TUNNEL) {
+ tunnel->type =
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+ tunnel->ipv4.ttl = IPDEFTTL;
+
+ memcpy((uint8_t *)&tunnel->ipv4.src_ip,
+ (uint8_t *)&sa->src.ip.ip4, 4);
+
+ memcpy((uint8_t *)&tunnel->ipv4.dst_ip,
+ (uint8_t *)&sa->dst.ip.ip4, 4);
+ }
+ /* TODO support for Transport and IPV6 tunnel */
+ }
+
+ sa->sec_session = rte_security_session_create(ctx,
+ &sess_conf, ipsec_ctx->session_pool);
+ if (sa->sec_session == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "SEC Session init failed: err: %d\n", ret);
+ return -1;
+ }
+ } else if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ struct rte_flow_error err;
+ struct rte_security_ctx *ctx = (struct rte_security_ctx *)
+ rte_eth_dev_get_sec_ctx(
+ sa->portid);
+ const struct rte_security_capability *sec_cap;
+
+ sa->sec_session = rte_security_session_create(ctx,
+ &sess_conf, ipsec_ctx->session_pool);
+ if (sa->sec_session == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "SEC Session init failed: err: %d\n", ret);
+ return -1;
+ }
+
+ sec_cap = rte_security_capabilities_get(ctx);
+
+ /* iterate until ESP tunnel*/
+ while (sec_cap->action !=
+ RTE_SECURITY_ACTION_TYPE_NONE) {
+
+ if (sec_cap->action == sa->type &&
+ sec_cap->protocol ==
+ RTE_SECURITY_PROTOCOL_IPSEC &&
+ sec_cap->ipsec.mode ==
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+ sec_cap->ipsec.direction == sa->direction)
+ break;
+ sec_cap++;
+ }
+
+ if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
+ RTE_LOG(ERR, IPSEC,
+ "No suitable security capability found\n");
+ return -1;
+ }
+
+ sa->ol_flags = sec_cap->ol_flags;
+ sa->security_ctx = ctx;
+ sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+
+ sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ sa->pattern[1].mask = &rte_flow_item_ipv4_mask;
+ if (sa->flags & IP6_TUNNEL) {
+ sa->pattern[1].spec = &sa->ipv6_spec;
+ memcpy(sa->ipv6_spec.hdr.dst_addr,
+ sa->dst.ip.ip6.ip6_b, 16);
+ memcpy(sa->ipv6_spec.hdr.src_addr,
+ sa->src.ip.ip6.ip6_b, 16);
+ } else {
+ sa->pattern[1].spec = &sa->ipv4_spec;
+ sa->ipv4_spec.hdr.dst_addr = sa->dst.ip.ip4;
+ sa->ipv4_spec.hdr.src_addr = sa->src.ip.ip4;
+ }
+
+ sa->pattern[2].type = RTE_FLOW_ITEM_TYPE_ESP;
+ sa->pattern[2].spec = &sa->esp_spec;
+ sa->pattern[2].mask = &rte_flow_item_esp_mask;
+ sa->esp_spec.hdr.spi = sa->spi;
+
+ sa->pattern[3].type = RTE_FLOW_ITEM_TYPE_END;
+
+ sa->action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ sa->action[0].conf = sa->sec_session;
+
+ sa->action[1].type = RTE_FLOW_ACTION_TYPE_END;
+
+ sa->attr.egress = (sa->direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS);
+ sa->flow = rte_flow_create(sa->portid,
+ &sa->attr, sa->pattern, sa->action, &err);
+ if (sa->flow == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "Failed to create ipsec flow msg: %s\n",
+ err.message);
+ return -1;
+ }
+ }
+ } else {
+ sa->crypto_session = rte_cryptodev_sym_session_create(
+ ipsec_ctx->session_pool);
+ rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
+ sa->crypto_session, sa->xforms,
+ ipsec_ctx->session_pool);
+
+ rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
+ &cdev_info);
+ if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
+ ret = rte_cryptodev_queue_pair_attach_sym_session(
+ ipsec_ctx->tbl[cdev_id_qp].id,
+ ipsec_ctx->tbl[cdev_id_qp].qp,
+ sa->crypto_session);
+ if (ret < 0) {
+ RTE_LOG(ERR, IPSEC,
+ "Session cannot be attached to qp %u\n",
+ ipsec_ctx->tbl[cdev_id_qp].qp);
+ return -1;
+ }
}
}
sa->cdev_id_qp = cdev_id_qp;
@@ -129,7 +264,9 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
{
int32_t ret = 0, i;
struct ipsec_mbuf_metadata *priv;
+ struct rte_crypto_sym_op *sym_cop;
struct ipsec_sa *sa;
+ struct cdev_qp *cqp;
for (i = 0; i < nb_pkts; i++) {
if (unlikely(sas[i] == NULL)) {
@@ -144,23 +281,76 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
sa = sas[i];
priv->sa = sa;
- priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-
- rte_prefetch0(&priv->sym_cop);
-
- if ((unlikely(sa->crypto_session == NULL)) &&
- create_session(ipsec_ctx, sa)) {
- rte_pktmbuf_free(pkts[i]);
- continue;
- }
-
- rte_crypto_op_attach_sym_session(&priv->cop,
- sa->crypto_session);
-
- ret = xform_func(pkts[i], sa, &priv->cop);
- if (unlikely(ret)) {
- rte_pktmbuf_free(pkts[i]);
+ switch (sa->type) {
+ case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->sec_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ sym_cop = get_sym_cop(&priv->cop);
+ sym_cop->m_src = pkts[i];
+
+ rte_security_attach_session(&priv->cop,
+ sa->sec_session);
+ break;
+ case RTE_SECURITY_ACTION_TYPE_NONE:
+
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->crypto_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ rte_crypto_op_attach_sym_session(&priv->cop,
+ sa->crypto_session);
+
+ ret = xform_func(pkts[i], sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+ break;
+ case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ break;
+ case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->sec_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ rte_security_attach_session(&priv->cop,
+ sa->sec_session);
+
+ ret = xform_func(pkts[i], sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ cqp = &ipsec_ctx->tbl[sa->cdev_id_qp];
+ cqp->ol_pkts[cqp->ol_pkts_cnt++] = pkts[i];
+ if (sa->ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(
+ sa->security_ctx,
+ sa->sec_session, pkts[i], NULL);
continue;
}
@@ -171,7 +361,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
static inline int
ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
- struct rte_mbuf *pkts[], uint16_t max_pkts)
+ struct rte_mbuf *pkts[], uint16_t max_pkts)
{
int32_t nb_pkts = 0, ret = 0, i, j, nb_cops;
struct ipsec_mbuf_metadata *priv;
@@ -186,6 +376,19 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
+ while (cqp->ol_pkts_cnt > 0 && nb_pkts < max_pkts) {
+ pkt = cqp->ol_pkts[--cqp->ol_pkts_cnt];
+ rte_prefetch0(pkt);
+ priv = get_priv(pkt);
+ sa = priv->sa;
+ ret = xform_func(pkt, sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkt);
+ continue;
+ }
+ pkts[nb_pkts++] = pkt;
+ }
+
if (cqp->in_flight == 0)
continue;
@@ -203,11 +406,14 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
RTE_ASSERT(sa != NULL);
- ret = xform_func(pkt, sa, cops[j]);
- if (unlikely(ret))
- rte_pktmbuf_free(pkt);
- else
- pkts[nb_pkts++] = pkt;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+ ret = xform_func(pkt, sa, cops[j]);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkt);
+ continue;
+ }
+ }
+ pkts[nb_pkts++] = pkt;
}
}
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 7d057ae..775b316 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -38,6 +38,8 @@
#include <rte_byteorder.h>
#include <rte_crypto.h>
+#include <rte_security.h>
+#include <rte_flow.h>
#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
#define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2
@@ -99,7 +101,10 @@ struct ipsec_sa {
uint32_t cdev_id_qp;
uint64_t seq;
uint32_t salt;
- struct rte_cryptodev_sym_session *crypto_session;
+ union {
+ struct rte_cryptodev_sym_session *crypto_session;
+ struct rte_security_session *sec_session;
+ };
enum rte_crypto_cipher_algorithm cipher_algo;
enum rte_crypto_auth_algorithm auth_algo;
enum rte_crypto_aead_algorithm aead_algo;
@@ -117,7 +122,28 @@ struct ipsec_sa {
uint8_t auth_key[MAX_KEY_SIZE];
uint16_t auth_key_len;
uint16_t aad_len;
- struct rte_crypto_sym_xform *xforms;
+ union {
+ struct rte_crypto_sym_xform *xforms;
+ struct rte_security_ipsec_xform *sec_xform;
+ };
+ enum rte_security_session_action_type type;
+ enum rte_security_ipsec_sa_direction direction;
+ uint16_t portid;
+ struct rte_security_ctx *security_ctx;
+ uint32_t ol_flags;
+
+#define MAX_RTE_FLOW_PATTERN (4)
+#define MAX_RTE_FLOW_ACTIONS (2)
+ struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN];
+ struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS];
+ struct rte_flow_attr attr;
+ union {
+ struct rte_flow_item_ipv4 ipv4_spec;
+ struct rte_flow_item_ipv6 ipv6_spec;
+ };
+ struct rte_flow_item_esp esp_spec;
+ struct rte_flow *flow;
+ struct rte_security_session_conf sess_conf;
} __rte_cache_aligned;
struct ipsec_mbuf_metadata {
@@ -133,6 +159,8 @@ struct cdev_qp {
uint16_t in_flight;
uint16_t len;
struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+ struct rte_mbuf *ol_pkts[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+ uint16_t ol_pkts_cnt;
};
struct ipsec_ctx {
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index ef94475..d8ee47b 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -41,16 +41,20 @@
#include <rte_memzone.h>
#include <rte_crypto.h>
+#include <rte_security.h>
#include <rte_cryptodev.h>
#include <rte_byteorder.h>
#include <rte_errno.h>
#include <rte_ip.h>
#include <rte_random.h>
+#include <rte_ethdev.h>
#include "ipsec.h"
#include "esp.h"
#include "parser.h"
+#define IPDEFTTL 64
+
struct supported_cipher_algo {
const char *keyword;
enum rte_crypto_cipher_algorithm algo;
@@ -238,6 +242,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
uint32_t src_p = 0;
uint32_t dst_p = 0;
uint32_t mode_p = 0;
+ uint32_t type_p = 0;
+ uint32_t portid_p = 0;
if (strcmp(tokens[0], "in") == 0) {
ri = &nb_sa_in;
@@ -550,6 +556,52 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
continue;
}
+ if (strcmp(tokens[ti], "type") == 0) {
+ APP_CHECK_PRESENCE(type_p, tokens[ti], status);
+ if (status->status < 0)
+ return;
+
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+
+ if (strcmp(tokens[ti], "inline-crypto-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO;
+ else if (strcmp(tokens[ti],
+ "inline-protocol-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ else if (strcmp(tokens[ti],
+ "lookaside-protocol-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
+ else if (strcmp(tokens[ti], "no-offload") == 0)
+ rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
+ else {
+ APP_CHECK(0, status, "Invalid input \"%s\"",
+ tokens[ti]);
+ return;
+ }
+
+ type_p = 1;
+ continue;
+ }
+
+ if (strcmp(tokens[ti], "port_id") == 0) {
+ APP_CHECK_PRESENCE(portid_p, tokens[ti], status);
+ if (status->status < 0)
+ return;
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+ rule->portid = atoi(tokens[ti]);
+ if (status->status < 0)
+ return;
+ portid_p = 1;
+ continue;
+ }
+
/* unrecognizeable input */
APP_CHECK(0, status, "unrecognized input \"%s\"",
tokens[ti]);
@@ -580,6 +632,14 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
if (status->status < 0)
return;
+ if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+ printf("Missing portid option, falling back to non-offload\n");
+
+ if (!type_p || !portid_p) {
+ rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
+ rule->portid = -1;
+ }
+
*ri = *ri + 1;
}
@@ -647,9 +707,11 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
struct sa_ctx {
struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES];
- struct {
- struct rte_crypto_sym_xform a;
- struct rte_crypto_sym_xform b;
+ union {
+ struct {
+ struct rte_crypto_sym_xform a;
+ struct rte_crypto_sym_xform b;
+ };
} xf[IPSEC_SA_MAX_ENTRIES];
};
@@ -682,6 +744,33 @@ sa_create(const char *name, int32_t socket_id)
}
static int
+check_eth_dev_caps(uint16_t portid, uint32_t inbound)
+{
+ struct rte_eth_dev_info dev_info;
+
+ rte_eth_dev_info_get(portid, &dev_info);
+
+ if (inbound) {
+ if ((dev_info.rx_offload_capa &
+ DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware RX IPSec offload is not supported\n");
+ return -EINVAL;
+ }
+
+ } else { /* outbound */
+ if ((dev_info.tx_offload_capa &
+ DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware TX IPSec offload is not supported\n");
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+
+static int
sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
uint32_t nb_entries, uint32_t inbound)
{
@@ -700,6 +789,16 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
*sa = entries[i];
sa->seq = 0;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL ||
+ sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (check_eth_dev_caps(sa->portid, inbound))
+ return -EINVAL;
+ }
+
+ sa->direction = (inbound == 1) ?
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS :
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+
switch (sa->flags) {
case IP4_TUNNEL:
sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -709,37 +808,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
iv_length = 16;
- if (inbound) {
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
- sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
- sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.aead.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.aead.op =
- RTE_CRYPTO_AEAD_OP_DECRYPT;
- sa_ctx->xf[idx].a.next = NULL;
- sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
- sa_ctx->xf[idx].a.aead.iv.length = iv_length;
- sa_ctx->xf[idx].a.aead.aad_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.aead.digest_length =
- sa->digest_len;
- } else { /* outbound */
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
- sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
- sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.aead.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.aead.op =
- RTE_CRYPTO_AEAD_OP_ENCRYPT;
- sa_ctx->xf[idx].a.next = NULL;
- sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
- sa_ctx->xf[idx].a.aead.iv.length = iv_length;
- sa_ctx->xf[idx].a.aead.aad_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.aead.digest_length =
- sa->digest_len;
- }
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
+ sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
+ sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].a.aead.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].a.aead.op = (inbound == 1) ?
+ RTE_CRYPTO_AEAD_OP_DECRYPT :
+ RTE_CRYPTO_AEAD_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.next = NULL;
+ sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].a.aead.iv.length = iv_length;
+ sa_ctx->xf[idx].a.aead.aad_length =
+ sa->aad_len;
+ sa_ctx->xf[idx].a.aead.digest_length =
+ sa->digest_len;
sa->xforms = &sa_ctx->xf[idx].a;
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 00/10] introduce security offload library
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 00/11] " Akhil Goyal
` (10 preceding siblings ...)
2017-10-24 14:15 ` [dpdk-dev] [PATCH v5 11/11] examples/ipsec-secgw: add support for security offload Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 01/10] cryptodev: support security APIs Akhil Goyal
` (10 more replies)
11 siblings, 11 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
This patchset introduces the rte_security library in DPDK.
It also includes sample driver implementations and changes in the
ipsec gateway application to demonstrate its usage.
The rte_security library builds on the idea proposed earlier [1],[2],[3]
to support IPsec inline and look aside crypto offload. Though the
current focus is only on the IPsec protocol, the library is not
limited to IPsec; it can be extended to other security protocols,
e.g. MACSEC, PDCP or DTLS.
With this library, crypto/ethernet devices can register themselves
with the security library to support security offload, as sketched below.
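For illustration, here is a minimal device-side registration sketch in C,
modelled on the dpaa2_sec patch later in this series; the my_pmd_* names
and the helper function are placeholders, not part of the library:

#include <errno.h>
#include <rte_cryptodev.h>
#include <rte_malloc.h>
#include <rte_security.h>
#include <rte_security_driver.h>

/* Placeholder PMD callbacks; a real PMD implements these (see the
 * dpaa2_sec patch in this series for a concrete example). */
static int my_pmd_security_session_create(void *device,
		struct rte_security_session_conf *conf,
		struct rte_security_session *sess,
		struct rte_mempool *mempool);
static int my_pmd_security_session_destroy(void *device,
		struct rte_security_session *sess);
static const struct rte_security_capability *
my_pmd_capabilities_get(void *device);

static struct rte_security_ops my_pmd_security_ops = {
	.session_create = my_pmd_security_session_create,
	.session_update = NULL,
	.session_stats_get = NULL,
	.session_destroy = my_pmd_security_session_destroy,
	.set_pkt_metadata = NULL,
	.capabilities_get = my_pmd_capabilities_get,
};

/* Called from the PMD's dev_init: attach a security context to the
 * crypto device so that the rte_security_* APIs can reach the PMD. */
static int
my_pmd_register_security(struct rte_cryptodev *cryptodev)
{
	struct rte_security_ctx *ctx;

	ctx = rte_malloc("rte_security_ctx", sizeof(*ctx), 0);
	if (ctx == NULL)
		return -ENOMEM;

	ctx->device = (void *)cryptodev;
	ctx->ops = &my_pmd_security_ops;
	ctx->sess_cnt = 0;
	cryptodev->security_ctx = ctx;

	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
	return 0;
}

An ethernet device registers in the same way, attaching its own
rte_security_ctx so it can be retrieved with rte_eth_dev_get_sec_ctx().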
The library supports 3 modes of operation (an application-level usage
sketch follows the list):
1. full protocol offload using crypto devices.
(RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
2. inline ipsec using ethernet devices to perform crypto operations
(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
3. full protocol offload using ethernet devices.
(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
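To give a feel for the application-facing API, below is a minimal C sketch
(modelled on the ipsec-secgw changes in this series) of creating an IPsec
session for mode 1; create_lookaside_ipsec_session is a hypothetical
helper, and error handling is trimmed:

#include <rte_cryptodev.h>
#include <rte_mempool.h>
#include <rte_security.h>

/* Hypothetical helper: create an egress ESP tunnel session for
 * lookaside protocol offload on crypto device cdev_id. The crypto
 * xform chain and session mempool are assumed to be prepared by the
 * caller, as ipsec-secgw does in sa.c/ipsec.c. */
static struct rte_security_session *
create_lookaside_ipsec_session(uint8_t cdev_id, uint32_t spi,
		struct rte_crypto_sym_xform *xforms,
		struct rte_mempool *sess_pool)
{
	struct rte_security_ctx *ctx = (struct rte_security_ctx *)
			rte_cryptodev_get_sec_ctx(cdev_id);
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.spi = spi,
			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
		},
		.crypto_xform = xforms,
	};

	if (ctx == NULL)
		return NULL;

	/* For modes 2 and 3 the context would instead come from
	 * rte_eth_dev_get_sec_ctx(port_id), and for inline crypto a
	 * rte_flow rule with RTE_FLOW_ACTION_TYPE_SECURITY steers
	 * matching traffic to the session. */
	return rte_security_session_create(ctx, &conf, sess_pool);
}

For mode 1 the application then attaches the session to its crypto
operations with rte_security_attach_session() before enqueueing them, as
done in the ipsec-secgw patch.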
The details for each mode are documented in the patchset in
doc/guides/prog_guide/rte_security.rst
The modifications in the ipsec-secgw application are also documented in
doc/guides/sample_app_ug/ipsec_secgw.rst
This patchset is also available at:
git://dpdk.org/draft/dpdk-draft-ipsec
branch: integration_v6
changes in v6:
1. fixed shared build
2. Incorporated comments from Thomas, Olivier and Shahaf
3. merged 8th patch of v5 to library patch.
4. moved cryptodev/net/mbuf/ethdev changes before the library patch so that
compilation can be done for each patch.
5. rebased over latest crypto-next.
changes in v5:
1. Incorporated comments from Shahaf, Konstantin and Thomas
2. Rebased over latest crypto-next tree(which is rebased over master) +
Aviad's v3 of ipsec-secgw fixes.
changes in v4:
1. Incorporated comments from Konstantin.
2. rebased over master
3. rebased over ipsec patches sent by Aviad
http://dpdk.org/ml/archives/dev/2017-October/079192.html
4. resolved multi process limitation
5. minor updates in documentation and drivers
changes in v3:
1. fixed compilation for FreeBSD
2. Incorporated comments from Pablo, John, Shahaf
3. Updated drivers for dpaa2_sec and ixgbe for some minor fixes
4. patch titles updated
5. fixed return type of rte_cryptodev_get_sec_id
changes in v2:
1. update documentation for rte_flow.
2. fixed API to unregister device to security library.
3. incorporated most of the comments from Jerin.
4. updated rte_security documentation as per the review comments from John.
5. Certain application updates for some cases.
6. updated changes in mbuf as per the comments from Olivier.
Future enhancements:
1. for full protocol offload - error handling and notification cases
2. add more security protocols
3. test application support
4. anti-replay support
5. SA time out support
6. Support Multi process use case
Reference:
[1] http://dpdk.org/ml/archives/dev/2017-July/070793.html
[2] http://dpdk.org/ml/archives/dev/2017-July/071893.html
[3] http://dpdk.org/ml/archives/dev/2017-August/072900.html
Akhil Goyal (5):
cryptodev: support security APIs
security: introduce security API and framework
doc: add details of rte security
crypto/dpaa2_sec: add support for protocol offload ipsec
examples/ipsec-secgw: add support for security offload
Boris Pismenny (3):
net: add ESP header to generic flow steering
mbuf: add security crypto flags and mbuf fields
ethdev: add rte flow action for crypto
Declan Doherty (1):
ethdev: support security APIs
Radu Nicolau (1):
net/ixgbe: enable inline ipsec
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 2 +
doc/api/doxy-api.conf | 1 +
doc/guides/cryptodevs/features/default.ini | 1 +
doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rte_flow.rst | 84 ++-
doc/guides/prog_guide/rte_security.rst | 564 +++++++++++++++++++
doc/guides/rel_notes/release_17_11.rst | 1 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 52 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 422 +++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 62 +++
drivers/net/ixgbe/Makefile | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
drivers/net/ixgbe/ixgbe_ethdev.c | 11 +
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
drivers/net/ixgbe/ixgbe_flow.c | 47 ++
drivers/net/ixgbe/ixgbe_ipsec.c | 737 +++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_ipsec.h | 151 +++++
drivers/net/ixgbe/ixgbe_rxtx.c | 59 +-
drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 57 ++
examples/ipsec-secgw/esp.c | 120 ++--
examples/ipsec-secgw/esp.h | 10 -
examples/ipsec-secgw/ipsec-secgw.c | 5 +
examples/ipsec-secgw/ipsec.c | 308 +++++++++--
examples/ipsec-secgw/ipsec.h | 32 +-
examples/ipsec-secgw/sa.c | 151 +++--
lib/Makefile | 4 +
lib/librte_cryptodev/rte_crypto.h | 3 +-
lib/librte_cryptodev/rte_crypto_sym.h | 2 +
lib/librte_cryptodev/rte_cryptodev.c | 10 +
lib/librte_cryptodev/rte_cryptodev.h | 8 +
lib/librte_cryptodev/rte_cryptodev_version.map | 1 +
lib/librte_ether/rte_ethdev.c | 13 +
lib/librte_ether/rte_ethdev.h | 9 +
lib/librte_ether/rte_ethdev_version.map | 1 +
lib/librte_ether/rte_flow.h | 65 +++
lib/librte_mbuf/rte_mbuf.c | 6 +
lib/librte_mbuf/rte_mbuf.h | 35 +-
lib/librte_mbuf/rte_mbuf_ptype.c | 1 +
lib/librte_mbuf/rte_mbuf_ptype.h | 11 +
lib/librte_net/Makefile | 2 +-
lib/librte_net/rte_esp.h | 60 ++
lib/librte_security/Makefile | 54 ++
lib/librte_security/rte_security.c | 149 +++++
lib/librte_security/rte_security.h | 529 ++++++++++++++++++
lib/librte_security/rte_security_driver.h | 156 ++++++
lib/librte_security/rte_security_version.map | 14 +
mk/rte.app.mk | 1 +
51 files changed, 3893 insertions(+), 158 deletions(-)
create mode 100644 doc/guides/prog_guide/rte_security.rst
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
create mode 100644 lib/librte_net/rte_esp.h
create mode 100644 lib/librte_security/Makefile
create mode 100644 lib/librte_security/rte_security.c
create mode 100644 lib/librte_security/rte_security.h
create mode 100644 lib/librte_security/rte_security_driver.h
create mode 100644 lib/librte_security/rte_security_version.map
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 01/10] cryptodev: support security APIs
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 02/10] net: add ESP header to generic flow steering Akhil Goyal
` (9 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Security ops are added to the crypto device to support
protocol-offloaded security operations.
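As a sketch of the intended usage (the helper below is hypothetical,
not part of this patch), once the rte_security library (patch 06/10)
is available an application can check the new feature flag and fetch
the device's security context:

#include <rte_cryptodev.h>
#include <rte_security.h>

/* Return the security context of a crypto device, or NULL if the
 * device does not advertise RTE_CRYPTODEV_FF_SECURITY. */
static struct rte_security_ctx *
crypto_sec_ctx(uint8_t cdev_id)
{
	struct rte_cryptodev_info info;

	rte_cryptodev_info_get(cdev_id, &info);
	if (!(info.feature_flags & RTE_CRYPTODEV_FF_SECURITY))
		return NULL;

	/* Added by this patch: returns dev->security_ctx. */
	return rte_cryptodev_get_sec_ctx(cdev_id);
}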
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
doc/guides/cryptodevs/features/default.ini | 1 +
lib/librte_cryptodev/rte_crypto.h | 3 ++-
lib/librte_cryptodev/rte_crypto_sym.h | 2 ++
lib/librte_cryptodev/rte_cryptodev.c | 10 ++++++++++
lib/librte_cryptodev/rte_cryptodev.h | 8 ++++++++
lib/librte_cryptodev/rte_cryptodev_version.map | 1 +
6 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index c98717a..18d66cb 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -10,6 +10,7 @@ Symmetric crypto =
Asymmetric crypto =
Sym operation chaining =
HW Accelerated =
+Protocol offload =
CPU SSE =
CPU AVX =
CPU AVX2 =
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 3ef9e41..eeed9ee 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -86,7 +86,8 @@ enum rte_crypto_op_status {
*/
enum rte_crypto_op_sess_type {
RTE_CRYPTO_OP_WITH_SESSION, /**< Session based crypto operation */
- RTE_CRYPTO_OP_SESSIONLESS /**< Session-less crypto operation */
+ RTE_CRYPTO_OP_SESSIONLESS, /**< Session-less crypto operation */
+ RTE_CRYPTO_OP_SECURITY_SESSION /**< Security session crypto operation */
};
/**
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 0a0ea59..5992063 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -508,6 +508,8 @@ struct rte_crypto_sym_op {
/**< Handle for the initialised session context */
struct rte_crypto_sym_xform *xform;
/**< Session-less API crypto operation parameters */
+ struct rte_security_session *sec_session;
+ /**< Handle for the initialised security session context */
};
RTE_STD_C11
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index e48d562..b9fbe0a 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -488,6 +488,16 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
return count;
}
+void *
+rte_cryptodev_get_sec_ctx(uint8_t dev_id)
+{
+ if (rte_crypto_devices[dev_id].feature_flags &
+ RTE_CRYPTODEV_FF_SECURITY)
+ return rte_crypto_devices[dev_id].security_ctx;
+
+ return NULL;
+}
+
int
rte_cryptodev_socket_id(uint8_t dev_id)
{
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index fd0e3f1..cdc12db 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -351,6 +351,8 @@ rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
/**< Utilises CPU NEON instructions */
#define RTE_CRYPTODEV_FF_CPU_ARM_CE (1ULL << 11)
/**< Utilises ARM CPU Cryptographic Extensions */
+#define RTE_CRYPTODEV_FF_SECURITY (1ULL << 12)
+/**< Support Security Protocol Processing */
/**
@@ -769,11 +771,17 @@ struct rte_cryptodev {
struct rte_cryptodev_cb_list link_intr_cbs;
/**< User application callback for interrupts if present */
+ void *security_ctx;
+ /**< Context for security ops */
+
__extension__
uint8_t attached : 1;
/**< Flag indicating the device is attached */
} __rte_cache_aligned;
+void *
+rte_cryptodev_get_sec_ctx(uint8_t dev_id);
+
/**
*
* The data part, with no function pointers, associated with each device.
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 919b6cc..3df3018 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -83,6 +83,7 @@ DPDK_17.08 {
DPDK_17.11 {
global:
+ rte_cryptodev_get_sec_ctx;
rte_cryptodev_name_get;
} DPDK_17.08;
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 02/10] net: add ESP header to generic flow steering
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 01/10] cryptodev: support security APIs Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 03/10] mbuf: add security crypto flags and mbuf fields Akhil Goyal
` (8 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
The ESP header is required for IPsec crypto actions.
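As an illustrative sketch (the helper is hypothetical, not part of this
patch), a flow item matching ESP packets on their SPI can be filled in
as follows; the SPI is assumed to be supplied in host byte order:

#include <string.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Fill a flow item that matches ESP packets carrying the given SPI.
 * The default mask added by this patch (rte_flow_item_esp_mask)
 * matches the SPI field only. The spec must stay valid until
 * rte_flow_create() is called. */
static void
esp_item_init(struct rte_flow_item *item, struct rte_flow_item_esp *spec,
	      uint32_t spi)
{
	memset(spec, 0, sizeof(*spec));
	spec->hdr.spi = rte_cpu_to_be_32(spi);

	item->type = RTE_FLOW_ITEM_TYPE_ESP;
	item->spec = spec;
	item->last = NULL;
	item->mask = &rte_flow_item_esp_mask;
}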
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
doc/api/doxy-api-index.md | 1 +
lib/librte_ether/rte_flow.h | 26 ++++++++++++++++++++
lib/librte_net/Makefile | 2 +-
lib/librte_net/rte_esp.h | 60 +++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 88 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_net/rte_esp.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 5aef5b2..6ac9593 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -100,6 +100,7 @@ The public API headers are grouped by topics:
[ethernet] (@ref rte_ether.h),
[ARP] (@ref rte_arp.h),
[ICMP] (@ref rte_icmp.h),
+ [ESP] (@ref rte_esp.h),
[IP] (@ref rte_ip.h),
[SCTP] (@ref rte_sctp.h),
[TCP] (@ref rte_tcp.h),
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 062e3ac..bd8274d 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -50,6 +50,7 @@
#include <rte_tcp.h>
#include <rte_udp.h>
#include <rte_byteorder.h>
+#include <rte_esp.h>
#ifdef __cplusplus
extern "C" {
@@ -336,6 +337,13 @@ enum rte_flow_item_type {
* See struct rte_flow_item_gtp.
*/
RTE_FLOW_ITEM_TYPE_GTPU,
+
+ /**
+ * Matches an ESP header.
+ *
+ * See struct rte_flow_item_esp.
+ */
+ RTE_FLOW_ITEM_TYPE_ESP,
};
/**
@@ -787,6 +795,24 @@ static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
#endif
/**
+ * RTE_FLOW_ITEM_TYPE_ESP
+ *
+ * Matches an ESP header.
+ */
+struct rte_flow_item_esp {
+ struct esp_hdr hdr; /**< ESP header definition. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_ESP. */
+#ifndef __cplusplus
+static const struct rte_flow_item_esp rte_flow_item_esp_mask = {
+ .hdr = {
+ .spi = 0xffffffff,
+ },
+};
+#endif
+
+/**
* Matching pattern item definition.
*
* A pattern is formed by stacking items starting from the lowest protocol
diff --git a/lib/librte_net/Makefile b/lib/librte_net/Makefile
index cdaf0c7..50c358e 100644
--- a/lib/librte_net/Makefile
+++ b/lib/librte_net/Makefile
@@ -43,7 +43,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_NET) := rte_net.c
SRCS-$(CONFIG_RTE_LIBRTE_NET) += rte_net_crc.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include := rte_ip.h rte_tcp.h rte_udp.h rte_esp.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_sctp.h rte_icmp.h rte_arp.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_ether.h rte_gre.h rte_net.h
SYMLINK-$(CONFIG_RTE_LIBRTE_NET)-include += rte_net_crc.h
diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
new file mode 100644
index 0000000..e228af0
--- /dev/null
+++ b/lib/librte_net/rte_esp.h
@@ -0,0 +1,60 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_ESP_H_
+#define _RTE_ESP_H_
+
+/**
+ * @file
+ *
+ * ESP-related defines
+ */
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * ESP Header
+ */
+struct esp_hdr {
+ uint32_t spi; /**< Security Parameters Index */
+ uint32_t seq; /**< packet sequence number */
+} __attribute__((__packed__));
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_ESP_H_ */
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 03/10] mbuf: add security crypto flags and mbuf fields
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 01/10] cryptodev: support security APIs Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 02/10] net: add ESP header to generic flow steering Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 04/10] ethdev: support security APIs Akhil Goyal
` (7 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
Add security crypto flags and update mbuf fields to support
IPsec crypto offload for transmitted packets, and to indicate
crypto result for received packets.
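As an illustrative sketch (the helper names are hypothetical), the new
flags are typically used along these lines on the Rx and Tx paths:

#include <rte_mbuf.h>

/* Rx path: check whether inline security processing was applied and
 * whether it succeeded, before further IPsec handling of the packet. */
static inline int
rx_sec_offload_ok(const struct rte_mbuf *m)
{
	if (!(m->ol_flags & PKT_RX_SEC_OFFLOAD))
		return 0;	/* not processed by the device */
	return !(m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED);
}

/* Tx path: request inline security processing for this packet. */
static inline void
tx_request_sec_offload(struct rte_mbuf *m)
{
	m->ol_flags |= PKT_TX_SEC_OFFLOAD;
}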
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ++++++
lib/librte_mbuf/rte_mbuf.h | 35 ++++++++++++++++++++++++++++++++---
lib/librte_mbuf/rte_mbuf_ptype.c | 1 +
lib/librte_mbuf/rte_mbuf_ptype.h | 11 +++++++++++
4 files changed, 50 insertions(+), 3 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 0e18709..6659261 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -324,6 +324,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
case PKT_RX_LRO: return "PKT_RX_LRO";
case PKT_RX_TIMESTAMP: return "PKT_RX_TIMESTAMP";
+ case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
+ case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
default: return NULL;
}
}
@@ -359,6 +361,8 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
{ PKT_RX_LRO, PKT_RX_LRO, NULL },
{ PKT_RX_TIMESTAMP, PKT_RX_TIMESTAMP, NULL },
+ { PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
+ { PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
};
const char *name;
unsigned int i;
@@ -411,6 +415,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
+ case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
default: return NULL;
}
}
@@ -444,6 +449,7 @@ rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK,
"PKT_TX_TUNNEL_NONE" },
{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
+ { PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
};
const char *name;
unsigned int i;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index cc38040..d88f8fe 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -189,11 +189,26 @@ extern "C" {
*/
#define PKT_RX_TIMESTAMP (1ULL << 17)
+/**
+ * Indicate that security offload processing was applied on the RX packet.
+ */
+#define PKT_RX_SEC_OFFLOAD (1ULL << 18)
+
+/**
+ * Indicate that security offload processing failed on the RX packet.
+ */
+#define PKT_RX_SEC_OFFLOAD_FAILED (1ULL << 19)
+
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Request security offload processing on the TX packet.
+ */
+#define PKT_TX_SEC_OFFLOAD (1ULL << 43)
+
+/**
* Offload the MACsec. This flag must be set by the application to enable
* this offload feature for a packet to be transmitted.
*/
@@ -316,7 +331,8 @@ extern "C" {
PKT_TX_QINQ_PKT | \
PKT_TX_VLAN_PKT | \
PKT_TX_TUNNEL_MASK | \
- PKT_TX_MACSEC)
+ PKT_TX_MACSEC | \
+ PKT_TX_SEC_OFFLOAD)
#define __RESERVED (1ULL << 61) /**< reserved for future mbuf use */
@@ -456,8 +472,21 @@ struct rte_mbuf {
uint32_t l3_type:4; /**< (Outer) L3 type. */
uint32_t l4_type:4; /**< (Outer) L4 type. */
uint32_t tun_type:4; /**< Tunnel type. */
- uint32_t inner_l2_type:4; /**< Inner L2 type. */
- uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ RTE_STD_C11
+ union {
+ uint8_t inner_esp_next_proto;
+ /**< ESP next protocol type, valid if
+ * RTE_PTYPE_TUNNEL_ESP tunnel type is set
+ * on both Tx and Rx.
+ */
+ __extension__
+ struct {
+ uint8_t inner_l2_type:4;
+ /**< Inner L2 type. */
+ uint8_t inner_l3_type:4;
+ /**< Inner L3 type. */
+ };
+ };
uint32_t inner_l4_type:4; /**< Inner L4 type. */
};
};
diff --git a/lib/librte_mbuf/rte_mbuf_ptype.c b/lib/librte_mbuf/rte_mbuf_ptype.c
index a450814..a623226 100644
--- a/lib/librte_mbuf/rte_mbuf_ptype.c
+++ b/lib/librte_mbuf/rte_mbuf_ptype.c
@@ -91,6 +91,7 @@ const char *rte_get_ptype_tunnel_name(uint32_t ptype)
case RTE_PTYPE_TUNNEL_GRENAT: return "TUNNEL_GRENAT";
case RTE_PTYPE_TUNNEL_GTPC: return "TUNNEL_GTPC";
case RTE_PTYPE_TUNNEL_GTPU: return "TUNNEL_GTPU";
+ case RTE_PTYPE_TUNNEL_ESP: return "TUNNEL_ESP";
default: return "TUNNEL_UNKNOWN";
}
}
diff --git a/lib/librte_mbuf/rte_mbuf_ptype.h b/lib/librte_mbuf/rte_mbuf_ptype.h
index 978c4a2..5c62435 100644
--- a/lib/librte_mbuf/rte_mbuf_ptype.h
+++ b/lib/librte_mbuf/rte_mbuf_ptype.h
@@ -415,6 +415,17 @@ extern "C" {
*/
#define RTE_PTYPE_TUNNEL_GTPU 0x00008000
/**
+ * ESP (IP Encapsulating Security Payload) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=51>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=51>
+ */
+#define RTE_PTYPE_TUNNEL_ESP 0x00009000
+/**
* Mask of tunneling packet types.
*/
#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 04/10] ethdev: support security APIs
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (2 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 03/10] mbuf: add security crypto flags and mbuf fields Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 05/10] ethdev: add rte flow action for crypto Akhil Goyal
` (6 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Declan Doherty <declan.doherty@intel.com>
The rte_flow_action type and ethdev are updated to support
rte_security sessions for crypto offload to the ethernet device.
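As an illustrative sketch (the helper is hypothetical and the rest of
the port configuration is omitted), an application enables the new Rx
security offload and fetches the port's security context as follows:

#include <rte_ethdev.h>
#include <rte_security.h>

static struct rte_security_ctx *
port_enable_security(uint8_t port_id, struct rte_eth_conf *port_conf)
{
	/* Request the rte_security Rx offload added by this patch;
	 * the per-port DEV_RX_OFFLOAD_SECURITY flag may be used instead. */
	port_conf->rxmode.security = 1;

	/* The PMD registers its security context at probe time;
	 * NULL means the port has no rte_security support. */
	return rte_eth_dev_get_sec_ctx(port_id);
}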
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
lib/librte_ether/rte_ethdev.c | 13 +++++++++++++
lib/librte_ether/rte_ethdev.h | 9 +++++++++
lib/librte_ether/rte_ethdev_version.map | 1 +
3 files changed, 23 insertions(+)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0b1e928..68b0318 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -301,6 +301,13 @@ rte_eth_dev_socket_id(uint16_t port_id)
return rte_eth_devices[port_id].data->numa_node;
}
+void *
+rte_eth_dev_get_sec_ctx(uint8_t port_id)
+{
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+ return rte_eth_devices[port_id].security_ctx;
+}
+
uint16_t
rte_eth_dev_count(void)
{
@@ -712,6 +719,8 @@ rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
offloads |= DEV_RX_OFFLOAD_TCP_LRO;
if (rxmode->hw_timestamp == 1)
offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ if (rxmode->security == 1)
+ offloads |= DEV_RX_OFFLOAD_SECURITY;
*rx_offloads = offloads;
}
@@ -764,6 +773,10 @@ rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
rxmode->hw_timestamp = 1;
else
rxmode->hw_timestamp = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+ rxmode->security = 1;
+ else
+ rxmode->security = 0;
}
int
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index b773589..028bf11 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -180,6 +180,8 @@ extern "C" {
#include <rte_dev.h>
#include <rte_devargs.h>
#include <rte_errno.h>
+#include <rte_common.h>
+
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
@@ -370,6 +372,7 @@ struct rte_eth_rxmode {
enable_scatter : 1, /**< Enable scatter packets rx handler */
enable_lro : 1, /**< Enable LRO */
hw_timestamp : 1, /**< Enable HW timestamp */
+ security : 1, /**< Enable rte_security offloads */
/**
* When set the offload bitfield should be ignored.
* Instead per-port Rx offloads should be set on offloads
@@ -963,6 +966,7 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_CRC_STRIP 0x00001000
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
#define DEV_RX_OFFLOAD_TIMESTAMP 0x00004000
+#define DEV_RX_OFFLOAD_SECURITY 0x00008000
#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM)
@@ -998,6 +1002,7 @@ struct rte_eth_conf {
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
+#define DEV_TX_OFFLOAD_SECURITY 0x00020000
struct rte_pci_device;
@@ -1741,8 +1746,12 @@ struct rte_eth_dev {
*/
struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
enum rte_eth_dev_state state; /**< Flag indicating the port state */
+ void *security_ctx; /**< Context for security ops */
} __rte_cache_aligned;
+void *
+rte_eth_dev_get_sec_ctx(uint8_t port_id);
+
struct rte_eth_dev_sriov {
uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
uint8_t nb_q_per_pool; /**< rx queue number per pool */
diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map
index 57d9b54..e9681ac 100644
--- a/lib/librte_ether/rte_ethdev_version.map
+++ b/lib/librte_ether/rte_ethdev_version.map
@@ -191,6 +191,7 @@ DPDK_17.08 {
DPDK_17.11 {
global:
+ rte_eth_dev_get_sec_ctx;
rte_eth_dev_pool_ops_supported;
rte_eth_dev_reset;
rte_flow_error_set;
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 05/10] ethdev: add rte flow action for crypto
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (3 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 04/10] ethdev: support security APIs Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 06/10] security: introduce security API and framework Akhil Goyal
` (5 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Boris Pismenny <borisp@mellanox.com>
The crypto action is specified by an application to request
crypto offload for a flow.
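As an illustrative sketch (the helper is hypothetical; item specs and
masks are omitted for brevity), a flow rule attaching a security
session to ingress ESP traffic mirrors the pattern/action tables added
to rte_flow.rst below:

#include <rte_flow.h>

static struct rte_flow *
create_security_flow(uint16_t port_id, void *security_session,
		     struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_security sec = {
		.security_session = security_session,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &sec },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}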
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 84 +++++++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_flow.h | 39 ++++++++++++++++++
2 files changed, 121 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index bcb438e..d158be5 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -187,7 +187,7 @@ Pattern item
Pattern items fall in two categories:
- Matching protocol headers and packet data (ANY, RAW, ETH, VLAN, IPV4,
- IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE and so on), usually
+ IPV6, ICMP, UDP, TCP, SCTP, VXLAN, MPLS, GRE, ESP and so on), usually
associated with a specification structure.
- Matching meta-data or affecting pattern processing (END, VOID, INVERT, PF,
@@ -972,6 +972,14 @@ flow rules.
- ``teid``: tunnel endpoint identifier.
- Default ``mask`` matches teid only.
+Item: ``ESP``
+^^^^^^^^^^^^^
+
+Matches an ESP header.
+
+- ``hdr``: ESP header definition (``rte_esp.h``).
+- Default ``mask`` matches SPI only.
+
Actions
~~~~~~~
@@ -989,7 +997,7 @@ They fall in three categories:
additional processing by subsequent flow rules.
- Other non-terminating meta actions that do not affect the fate of packets
- (END, VOID, MARK, FLAG, COUNT).
+ (END, VOID, MARK, FLAG, COUNT, SECURITY).
When several actions are combined in a flow rule, they should all have
different types (e.g. dropping a packet twice is not possible).
@@ -1394,6 +1402,78 @@ the rte_mtr* API.
| ``mtr_id`` | MTR object ID |
+--------------+---------------+
+Action: ``SECURITY``
+^^^^^^^^^^^^^^^^^^^^
+
+Perform the security action on flows matched by the pattern items
+according to the configuration of the security session.
+
+This action modifies the payload of matched flows. For INLINE_CRYPTO, the
+security protocol headers and IV are fully provided by the application as
+specified in the flow pattern. The payload of matching packets is
+encrypted on egress, and decrypted and authenticated on ingress.
+For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
+providing full encapsulation and decapsulation of packets in security
+protocols. The flow pattern specifies both the outer security header fields
+and the inner packet fields. The security session specified in the action
+must match the pattern parameters.
+
+The security session specified in the action must be created on the same
+port as the flow action that is being specified.
+
+The ingress/egress flow attribute should match that specified in the
+security session if the security session supports the definition of the
+direction.
+
+Multiple flows can be configured to use the same security session.
+
+- Non-terminating by default.
+
+.. _table_rte_flow_action_security:
+
+.. table:: SECURITY
+
+ +----------------------+--------------------------------------+
+ | Field | Value |
+ +======================+======================================+
+ | ``security_session`` | security session to apply |
+ +----------------------+--------------------------------------+
+
+The following is an example of configuring IPsec inline using the
+INLINE_CRYPTO security session:
+
+The encryption algorithm, keys and salt are part of the opaque
+``rte_security_session``. The SA is identified according to the IP and ESP
+fields in the pattern items.
+
+.. _table_rte_flow_item_esp_inline_example:
+
+.. table:: IPsec inline crypto flow pattern items.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | ESP |
+ +-------+----------+
+ | 3 | END |
+ +-------+----------+
+
+.. _table_rte_flow_action_esp_inline_example:
+
+.. table:: IPsec inline flow actions.
+
+ +-------+----------+
+ | Index | Action |
+ +=======+==========+
+ | 0 | SECURITY |
+ +-------+----------+
+ | 1 | END |
+ +-------+----------+
+
Negative types
~~~~~~~~~~~~~~
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index bd8274d..47c88ea 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1001,6 +1001,14 @@ enum rte_flow_action_type {
* See file rte_mtr.h for MTR object configuration.
*/
RTE_FLOW_ACTION_TYPE_METER,
+
+ /**
+ * Redirects packets to security engine of current device for security
+ * processing as specified by security session.
+ *
+ * See struct rte_flow_action_security.
+ */
+ RTE_FLOW_ACTION_TYPE_SECURITY
};
/**
@@ -1108,6 +1116,37 @@ struct rte_flow_action_meter {
};
/**
+ * RTE_FLOW_ACTION_TYPE_SECURITY
+ *
+ * Perform the security action on flows matched by the pattern items
+ * according to the configuration of the security session.
+ *
+ * This action modifies the payload of matched flows. For INLINE_CRYPTO, the
+ * security protocol headers and IV are fully provided by the application as
+ * specified in the flow pattern. The payload of matching packets is
+ * encrypted on egress, and decrypted and authenticated on ingress.
+ * For INLINE_PROTOCOL, the security protocol is fully offloaded to HW,
+ * providing full encapsulation and decapsulation of packets in security
+ * protocols. The flow pattern specifies both the outer security header fields
+ * and the inner packet fields. The security session specified in the action
+ * must match the pattern parameters.
+ *
+ * The security session specified in the action must be created on the same
+ * port as the flow action that is being specified.
+ *
+ * The ingress/egress flow attribute should match that specified in the
+ * security session if the security session supports the definition of the
+ * direction.
+ *
+ * Multiple flows can be configured to use the same security session.
+ *
+ * Non-terminating by default.
+ */
+struct rte_flow_action_security {
+ void *security_session; /**< Pointer to security session structure. */
+};
+
+/**
* Definition of a single action.
*
* A list of actions is terminated by a END action.
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 06/10] security: introduce security API and framework
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (4 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 05/10] ethdev: add rte flow action for crypto Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 07/10] doc: add details of rte security Akhil Goyal
` (4 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
rte_security library provides APIs for security session
create/free for protocol offload or offloaded crypto
operation to ethernet device.
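As an illustrative sketch (the helper is hypothetical; the session
configuration, i.e. the IPsec and crypto transforms, is assumed to be
filled in by the caller), the lookaside-protocol flow on a crypto
device looks roughly as follows:

#include <errno.h>
#include <rte_cryptodev.h>
#include <rte_security.h>

static int
lookaside_ipsec_setup(uint8_t cdev_id, struct rte_mempool *sess_mp,
		      struct rte_security_session_conf *conf,
		      struct rte_crypto_op *op)
{
	struct rte_security_ctx *ctx = rte_cryptodev_get_sec_ctx(cdev_id);
	struct rte_security_session *sess;

	if (ctx == NULL)
		return -ENOTSUP;

	conf->action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
	conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;

	sess = rte_security_session_create(ctx, conf, sess_mp);
	if (sess == NULL)
		return -EINVAL;

	/* Sets op->sess_type to RTE_CRYPTO_OP_SECURITY_SESSION. */
	return rte_security_attach_session(op, sess);
}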
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf | 1 +
doc/guides/rel_notes/release_17_11.rst | 1 +
lib/Makefile | 4 +
lib/librte_security/Makefile | 54 +++
lib/librte_security/rte_security.c | 149 ++++++++
lib/librte_security/rte_security.h | 529 +++++++++++++++++++++++++++
lib/librte_security/rte_security_driver.h | 156 ++++++++
lib/librte_security/rte_security_version.map | 14 +
mk/rte.app.mk | 1 +
12 files changed, 921 insertions(+)
create mode 100644 lib/librte_security/Makefile
create mode 100644 lib/librte_security/rte_security.c
create mode 100644 lib/librte_security/rte_security.h
create mode 100644 lib/librte_security/rte_security_driver.h
create mode 100644 lib/librte_security/rte_security_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 99e001d..a4e72bd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -280,6 +280,12 @@ T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/librte_eventdev/*eth_rx_adapter*
F: test/test/test_event_eth_rx_adapter.c
+Security API - EXPERIMENTAL
+M: Akhil Goyal <akhil.goyal@nxp.com>
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_security/
+F: doc/guides/prog_guide/rte_security.rst
+
Networking Drivers
------------------
diff --git a/config/common_base b/config/common_base
index 4ddde59..75aa0e1 100644
--- a/config/common_base
+++ b/config/common_base
@@ -548,6 +548,11 @@ CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO=n
CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO_DEBUG=n
#
+# Compile generic security library
+#
+CONFIG_RTE_LIBRTE_SECURITY=y
+
+#
# Compile generic event device library
#
CONFIG_RTE_LIBRTE_EVENTDEV=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 6ac9593..9e95380 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -43,6 +43,7 @@ The public API headers are grouped by topics:
[rte_tm] (@ref rte_tm.h),
[rte_mtr] (@ref rte_mtr.h),
[cryptodev] (@ref rte_cryptodev.h),
+ [security] (@ref rte_security.h),
[eventdev] (@ref rte_eventdev.h),
[event_eth_rx_adapter] (@ref rte_event_eth_rx_adapter.h),
[metrics] (@ref rte_metrics.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 9edb6fd..65549dc 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -71,6 +71,7 @@ INPUT = doc/api/doxy-api-index.md \
lib/librte_reorder \
lib/librte_ring \
lib/librte_sched \
+ lib/librte_security \
lib/librte_table \
lib/librte_timer \
lib/librte_vhost
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index 3298ef5..8f08d8c 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -389,6 +389,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
+ + librte_security.so.1
+ librte_table.so.3
librte_timer.so.1
librte_vhost.so.3
diff --git a/lib/Makefile b/lib/Makefile
index 6e45700..8693cda 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -50,6 +50,10 @@ DEPDIRS-librte_ether += librte_mbuf
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
DEPDIRS-librte_cryptodev += librte_kvargs
+DIRS-$(CONFIG_RTE_LIBRTE_SECURITY) += librte_security
+DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
+DEPDIRS-librte_security += librte_ether
+DEPDIRS-librte_security += librte_cryptodev
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile
new file mode 100644
index 0000000..bb93ec3
--- /dev/null
+++ b/lib/librte_security/Makefile
@@ -0,0 +1,54 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_security.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal -lrte_mempool
+
+# library source files
+SRCS-y += rte_security.c
+
+# export include files
+SYMLINK-y-include += rte_security.h
+SYMLINK-y-include += rte_security_driver.h
+
+# versioning export map
+EXPORT_MAP := rte_security_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
new file mode 100644
index 0000000..1227fca
--- /dev/null
+++ b/lib/librte_security/rte_security.c
@@ -0,0 +1,149 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of NXP nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_malloc.h>
+#include <rte_dev.h>
+
+#include "rte_security.h"
+#include "rte_security_driver.h"
+
+struct rte_security_session *
+rte_security_session_create(struct rte_security_ctx *instance,
+ struct rte_security_session_conf *conf,
+ struct rte_mempool *mp)
+{
+ struct rte_security_session *sess = NULL;
+
+ if (conf == NULL)
+ return NULL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_create, NULL);
+
+ if (rte_mempool_get(mp, (void *)&sess))
+ return NULL;
+
+ if (instance->ops->session_create(instance->device, conf, sess, mp)) {
+ rte_mempool_put(mp, (void *)sess);
+ return NULL;
+ }
+ instance->sess_cnt++;
+
+ return sess;
+}
+
+int
+rte_security_session_update(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_update, -ENOTSUP);
+ return instance->ops->session_update(instance->device, sess, conf);
+}
+
+int
+rte_security_session_stats_get(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_stats_get, -ENOTSUP);
+ return instance->ops->session_stats_get(instance->device, sess, stats);
+}
+
+int
+rte_security_session_destroy(struct rte_security_ctx *instance,
+ struct rte_security_session *sess)
+{
+ int ret;
+ struct rte_mempool *mp = rte_mempool_from_obj(sess);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->session_destroy, -ENOTSUP);
+
+ if (instance->sess_cnt)
+ instance->sess_cnt--;
+
+ ret = instance->ops->session_destroy(instance->device, sess);
+ if (!ret)
+ rte_mempool_put(mp, (void *)sess);
+
+ return ret;
+}
+
+int
+rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_mbuf *m, void *params)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->set_pkt_metadata, -ENOTSUP);
+ return instance->ops->set_pkt_metadata(instance->device,
+ sess, m, params);
+}
+
+const struct rte_security_capability *
+rte_security_capabilities_get(struct rte_security_ctx *instance)
+{
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
+ return instance->ops->capabilities_get(instance->device);
+}
+
+const struct rte_security_capability *
+rte_security_capability_get(struct rte_security_ctx *instance,
+ struct rte_security_capability_idx *idx)
+{
+ const struct rte_security_capability *capabilities;
+ const struct rte_security_capability *capability;
+ uint16_t i = 0;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->capabilities_get, NULL);
+ capabilities = instance->ops->capabilities_get(instance->device);
+
+ if (capabilities == NULL)
+ return NULL;
+
+ while ((capability = &capabilities[i++])->action
+ != RTE_SECURITY_ACTION_TYPE_NONE) {
+ if (capability->action == idx->action &&
+ capability->protocol == idx->protocol) {
+ if (idx->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ if (capability->ipsec.proto ==
+ idx->ipsec.proto &&
+ capability->ipsec.mode ==
+ idx->ipsec.mode &&
+ capability->ipsec.direction ==
+ idx->ipsec.direction)
+ return capability;
+ }
+ }
+ }
+
+ return NULL;
+}
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
new file mode 100644
index 0000000..7e687d2
--- /dev/null
+++ b/lib/librte_security/rte_security.h
@@ -0,0 +1,529 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright 2017 NXP.
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of NXP nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SECURITY_H_
+#define _RTE_SECURITY_H_
+
+/**
+ * @file rte_security.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Security Common Definitions
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <sys/types.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+#include <netinet/ip6.h>
+
+#include <rte_common.h>
+#include <rte_crypto.h>
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** IPSec protocol mode */
+enum rte_security_ipsec_sa_mode {
+ RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ /**< IPSec Transport mode */
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ /**< IPSec Tunnel mode */
+};
+
+/** IPSec Protocol */
+enum rte_security_ipsec_sa_protocol {
+ RTE_SECURITY_IPSEC_SA_PROTO_AH,
+ /**< AH protocol */
+ RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ /**< ESP protocol */
+};
+
+/** IPSEC tunnel type */
+enum rte_security_ipsec_tunnel_type {
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ /**< Outer header is IPv4 */
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ /**< Outer header is IPv6 */
+};
+
+/**
+ * Security context for crypto/eth devices
+ *
+ * Security instance for each driver to register security operations.
+ * The application can get the security context from the crypto/eth device id
+ * using the APIs rte_cryptodev_get_sec_ctx()/rte_eth_dev_get_sec_ctx()
+ * This structure is used to identify the device(crypto/eth) for which the
+ * security operations need to be performed.
+ */
+struct rte_security_ctx {
+ void *device;
+ /**< Crypto/ethernet device attached */
+ struct rte_security_ops *ops;
+ /**< Pointer to security ops for the device */
+ uint16_t sess_cnt;
+ /**< Number of sessions attached to this context */
+};
+
+/**
+ * IPSEC tunnel parameters
+ *
+ * These parameters are used to build outbound tunnel headers.
+ */
+struct rte_security_ipsec_tunnel_param {
+ enum rte_security_ipsec_tunnel_type type;
+ /**< Tunnel type: IPv4 or IPv6 */
+ RTE_STD_C11
+ union {
+ struct {
+ struct in_addr src_ip;
+ /**< IPv4 source address */
+ struct in_addr dst_ip;
+ /**< IPv4 destination address */
+ uint8_t dscp;
+ /**< IPv4 Differentiated Services Code Point */
+ uint8_t df;
+ /**< IPv4 Don't Fragment bit */
+ uint8_t ttl;
+ /**< IPv4 Time To Live */
+ } ipv4;
+ /**< IPv4 header parameters */
+ struct {
+ struct in6_addr src_addr;
+ /**< IPv6 source address */
+ struct in6_addr dst_addr;
+ /**< IPv6 destination address */
+ uint8_t dscp;
+ /**< IPv6 Differentiated Services Code Point */
+ uint32_t flabel;
+ /**< IPv6 flow label */
+ uint8_t hlimit;
+ /**< IPv6 hop limit */
+ } ipv6;
+ /**< IPv6 header parameters */
+ };
+};
+
+/**
+ * IPsec Security Association option flags
+ */
+struct rte_security_ipsec_sa_options {
+ /**< Extended Sequence Numbers (ESN)
+ *
+ * * 1: Use extended (64 bit) sequence numbers
+ * * 0: Use normal sequence numbers
+ */
+ uint32_t esn : 1;
+
+ /**< UDP encapsulation
+ *
+ * * 1: Do UDP encapsulation/decapsulation so that IPSEC packets can
+ * traverse through NAT boxes.
+ * * 0: No UDP encapsulation
+ */
+ uint32_t udp_encap : 1;
+
+ /**< Copy DSCP bits
+ *
+ * * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to
+ * the outer IP header in encapsulation, and vice versa in
+ * decapsulation.
+ * * 0: Do not change DSCP field.
+ */
+ uint32_t copy_dscp : 1;
+
+ /**< Copy IPv6 Flow Label
+ *
+ * * 1: Copy IPv6 flow label from inner IPv6 header to the
+ * outer IPv6 header.
+ * * 0: Outer header is not modified.
+ */
+ uint32_t copy_flabel : 1;
+
+ /**< Copy IPv4 Don't Fragment bit
+ *
+ * * 1: Copy the DF bit from the inner IPv4 header to the outer
+ * IPv4 header.
+ * * 0: Outer header is not modified.
+ */
+ uint32_t copy_df : 1;
+
+ /**< Decrement inner packet Time To Live (TTL) field
+ *
+ * * 1: In tunnel mode, decrement inner packet IPv4 TTL or
+ * IPv6 Hop Limit after tunnel decapsulation, or before tunnel
+ * encapsulation.
+ * * 0: Inner packet is not modified.
+ */
+ uint32_t dec_ttl : 1;
+};
+
+/** IPSec security association direction */
+enum rte_security_ipsec_sa_direction {
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ /**< Encrypt and generate digest */
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ /**< Verify digest and decrypt */
+};
+
+/**
+ * IPsec security association configuration data.
+ *
+ * This structure contains data required to create an IPsec SA security session.
+ */
+struct rte_security_ipsec_xform {
+ uint32_t spi;
+ /**< SA security parameter index */
+ uint32_t salt;
+ /**< SA salt */
+ struct rte_security_ipsec_sa_options options;
+ /**< various SA options */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPSec SA Direction - Egress/Ingress */
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA Protocol - AH/ESP */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA Mode - transport/tunnel */
+ struct rte_security_ipsec_tunnel_param tunnel;
+ /**< Tunnel parameters, NULL for transport mode */
+};
+
+/**
+ * MACsec security session configuration
+ */
+struct rte_security_macsec_xform {
+ /** To be Filled */
+};
+
+/**
+ * Security session action type.
+ */
+enum rte_security_session_action_type {
+ RTE_SECURITY_ACTION_TYPE_NONE,
+ /**< No security actions */
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ /**< Crypto processing for security protocol is processed inline
+ * during transmission
+ */
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ /**< All security protocol processing is performed inline during
+ * transmission
+ */
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ /**< All security protocol processing including crypto is performed
+ * on a lookaside accelerator
+ */
+};
+
+/** Security session protocol definition */
+enum rte_security_session_protocol {
+ RTE_SECURITY_PROTOCOL_IPSEC,
+ /**< IPsec Protocol */
+ RTE_SECURITY_PROTOCOL_MACSEC,
+ /**< MACSec Protocol */
+};
+
+/**
+ * Security session configuration
+ */
+struct rte_security_session_conf {
+ enum rte_security_session_action_type action_type;
+ /**< Type of action to be performed on the session */
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+ union {
+ struct rte_security_ipsec_xform ipsec;
+ struct rte_security_macsec_xform macsec;
+ };
+ /**< Configuration parameters for security session */
+ struct rte_crypto_sym_xform *crypto_xform;
+ /**< Security Session Crypto Transformations */
+};
+
+struct rte_security_session {
+ void *sess_private_data;
+ /**< Private session material */
+};
+
+/**
+ * Create security session as specified by the session configuration
+ *
+ * @param instance security instance
+ * @param conf session configuration parameters
+ * @param mp mempool to allocate session objects from
+ * @return
+ * - On success, pointer to session
+ * - On failure, NULL
+ */
+struct rte_security_session *
+rte_security_session_create(struct rte_security_ctx *instance,
+ struct rte_security_session_conf *conf,
+ struct rte_mempool *mp);
+
+/**
+ * Update security session as specified by the session configuration
+ *
+ * @param instance security instance
+ * @param sess session to update parameters
+ * @param conf update configuration parameters
+ * @return
+ * - On success returns 0
+ * - On failure return errno
+ */
+int
+rte_security_session_update(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf);
+
+/**
+ * Free security session header and the session private data and
+ * return it to its original mempool.
+ *
+ * @param instance security instance
+ * @param sess security session to be freed
+ *
+ * @return
+ * - 0 if successful.
+ * - -EINVAL if session is NULL.
+ * - -EBUSY if not all device private data has been freed.
+ */
+int
+rte_security_session_destroy(struct rte_security_ctx *instance,
+ struct rte_security_session *sess);
+
+/**
+ * Updates the buffer with device-specific defined metadata
+ *
+ * @param instance security instance
+ * @param sess security session
+ * @param mb packet mbuf to set metadata on.
+ * @param params device-specific defined parameters
+ * required for metadata
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_mbuf *mb, void *params);
+
+/**
+ * Attach a session to a symmetric crypto operation
+ *
+ * @param sym_op crypto operation
+ * @param sess security session
+ */
+static inline int
+__rte_security_attach_session(struct rte_crypto_sym_op *sym_op,
+ struct rte_security_session *sess)
+{
+ sym_op->sec_session = sess;
+
+ return 0;
+}
+
+static inline void *
+get_sec_session_private_data(const struct rte_security_session *sess)
+{
+ return sess->sess_private_data;
+}
+
+static inline void
+set_sec_session_private_data(struct rte_security_session *sess,
+ void *private_data)
+{
+ sess->sess_private_data = private_data;
+}
+
+/**
+ * Attach a session to a crypto operation.
+ * This API is needed only in case of RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL.
+ * For other rte_security_session_action_type, ol_flags in rte_mbuf may be
+ * defined to perform security operations.
+ *
+ * @param op crypto operation
+ * @param sess security session
+ */
+static inline int
+rte_security_attach_session(struct rte_crypto_op *op,
+ struct rte_security_session *sess)
+{
+ if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
+ return -EINVAL;
+
+ op->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+
+ return __rte_security_attach_session(op->sym, sess);
+}
+
+struct rte_security_macsec_stats {
+ uint64_t reserved;
+};
+
+struct rte_security_ipsec_stats {
+ uint64_t reserved;
+
+};
+
+struct rte_security_stats {
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+
+ union {
+ struct rte_security_macsec_stats macsec;
+ struct rte_security_ipsec_stats ipsec;
+ };
+};
+
+/**
+ * Get security session statistics
+ *
+ * @param instance security instance
+ * @param sess security session
+ * @param stats statistics
+ * @return
+ * - On success return 0
+ * - On failure errno
+ */
+int
+rte_security_session_stats_get(struct rte_security_ctx *instance,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats);
+
+/**
+ * Security capability definition
+ */
+struct rte_security_capability {
+ enum rte_security_session_action_type action;
+ /**< Security action type*/
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol */
+ RTE_STD_C11
+ union {
+ struct {
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA protocol */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA mode */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPsec SA direction */
+ struct rte_security_ipsec_sa_options options;
+ /**< IPsec SA supported options */
+ } ipsec;
+ /**< IPsec capability */
+ struct {
+ /* To be Filled */
+ } macsec;
+ /**< MACsec capability */
+ };
+
+ const struct rte_cryptodev_capabilities *crypto_capabilities;
+ /**< Corresponding crypto capabilities for security capability */
+
+ uint32_t ol_flags;
+ /**< Device offload flags */
+};
+
+#define RTE_SECURITY_TX_OLOAD_NEED_MDATA 0x00000001
+/**< HW needs metadata update, see rte_security_set_pkt_metadata().
+ */
+
+#define RTE_SECURITY_TX_HW_TRAILER_OFFLOAD 0x00000002
+/**< HW constructs trailer of packets
+ * Transmitted packets will have the trailer added to them
+ * by hardware. The next protocol field will be based on
+ * the mbuf->inner_esp_next_proto field.
+ */
+#define RTE_SECURITY_RX_HW_TRAILER_OFFLOAD 0x00010000
+/**< HW removes trailer of packets
+ * Received packets have no trailer, the next protocol field
+ * is supplied in the mbuf->inner_esp_next_proto field.
+ * Inner packet is not modified.
+ */
+
+/**
+ * Security capability index used to query a security instance for a specific
+ * security capability
+ */
+struct rte_security_capability_idx {
+ enum rte_security_session_action_type action;
+ enum rte_security_session_protocol protocol;
+
+ union {
+ struct {
+ enum rte_security_ipsec_sa_protocol proto;
+ enum rte_security_ipsec_sa_mode mode;
+ enum rte_security_ipsec_sa_direction direction;
+ } ipsec;
+ };
+};
+
+/**
+ * Returns array of security instance capabilities
+ *
+ * @param instance Security instance.
+ *
+ * @return
+ * - Returns array of security capabilities.
+ * - Return NULL if no capabilities available.
+ */
+const struct rte_security_capability *
+rte_security_capabilities_get(struct rte_security_ctx *instance);
+
+/**
+ * Query if a specific capability is available on security instance
+ *
+ * @param instance security instance.
+ * @param idx security capability index to match against
+ *
+ * @return
+ * - Returns pointer to security capability on match of capability
+ * index criteria.
+ * - Return NULL if the capability is not matched on the security instance.
+ */
+const struct rte_security_capability *
+rte_security_capability_get(struct rte_security_ctx *instance,
+ struct rte_security_capability_idx *idx);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SECURITY_H_ */
diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h
new file mode 100644
index 0000000..997fbe7
--- /dev/null
+++ b/lib/librte_security/rte_security_driver.h
@@ -0,0 +1,156 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * Copyright 2017 NXP.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SECURITY_DRIVER_H_
+#define _RTE_SECURITY_DRIVER_H_
+
+/**
+ * @file rte_security_driver.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Security Common Definitions
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "rte_security.h"
+
+/**
+ * Configure a security session on a device.
+ *
+ * @param device Crypto/eth device pointer
+ * @param conf Security session configuration
+ * @param sess Pointer to Security private session structure
+ * @param mp Mempool where the private session is allocated
+ *
+ * @return
+ * - Returns 0 if the private session structure has been created successfully.
+ * - Returns -EINVAL if input parameters are invalid.
+ * - Returns -ENOTSUP if crypto device does not support the crypto transform.
+ * - Returns -ENOMEM if the private session could not be allocated.
+ */
+typedef int (*security_session_create_t)(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *sess,
+ struct rte_mempool *mp);
+
+/**
+ * Free driver private session data.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Security session structure
+ */
+typedef int (*security_session_destroy_t)(void *device,
+ struct rte_security_session *sess);
+
+/**
+ * Update driver private session data.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Pointer to Security private session structure
+ * @param conf Security session configuration
+ *
+ * @return
+ * - Returns 0 if the private session structure has been updated successfully.
+ * - Returns -EINVAL if input parameters are invalid.
+ * - Returns -ENOTSUP if crypto device does not support the crypto transform.
+ */
+typedef int (*security_session_update_t)(void *device,
+ struct rte_security_session *sess,
+ struct rte_security_session_conf *conf);
+/**
+ * Get stats from the PMD.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Pointer to Security private session structure
+ * @param stats Security stats of the driver
+ *
+ * @return
+ * - Returns 0 if the session statistics have been retrieved successfully.
+ * - Returns -EINVAL if session parameters are invalid.
+ */
+typedef int (*security_session_stats_get_t)(void *device,
+ struct rte_security_session *sess,
+ struct rte_security_stats *stats);
+
+/**
+ * Update the mbuf with provided metadata.
+ *
+ * @param device Crypto/eth device pointer
+ * @param sess Security session structure
+ * @param m Packet mbuf
+ * @param params Device specific metadata parameters
+ *
+ * @return
+ * - Returns 0 if metadata updated successfully.
+ * - Returns a negative value on error.
+ */
+typedef int (*security_set_pkt_metadata_t)(void *device,
+ struct rte_security_session *sess, struct rte_mbuf *m,
+ void *params);
+
+/**
+ * Get security capabilities of the device.
+ *
+ * @param device crypto/eth device pointer
+ *
+ * @return
+ * - Returns rte_security_capability pointer on success.
+ * - Returns NULL on error.
+ */
+typedef const struct rte_security_capability *(*security_capabilities_get_t)(
+ void *device);
+
+/** Security operations function pointer table */
+struct rte_security_ops {
+ security_session_create_t session_create;
+ /**< Configure a security session. */
+ security_session_update_t session_update;
+ /**< Update a security session. */
+ security_session_stats_get_t session_stats_get;
+ /**< Get security session statistics. */
+ security_session_destroy_t session_destroy;
+ /**< Clear a security session's private data. */
+ security_set_pkt_metadata_t set_pkt_metadata;
+ /**< Update mbuf metadata. */
+ security_capabilities_get_t capabilities_get;
+ /**< Get security capabilities. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SECURITY_DRIVER_H_ */
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
new file mode 100644
index 0000000..e12c04b
--- /dev/null
+++ b/lib/librte_security/rte_security_version.map
@@ -0,0 +1,14 @@
+EXPERIMENTAL {
+ global:
+
+ rte_security_attach_session;
+ rte_security_capabilities_get;
+ rte_security_capability_get;
+ rte_security_session_create;
+ rte_security_session_destroy;
+ rte_security_session_stats_get;
+ rte_security_session_update;
+ rte_security_set_pkt_metadata;
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 482656c..fbb4bc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -94,6 +94,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF) += -lrte_mbuf
_LDLIBS-$(CONFIG_RTE_LIBRTE_NET) += -lrte_net
_LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lrte_ethdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += -lrte_cryptodev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 07/10] doc: add details of rte security
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (5 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 06/10] security: introduce security API and framework Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec Akhil Goyal
` (3 subsequent siblings)
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rte_security.rst | 564 +++++++++++++++++++++++++++++++++
2 files changed, 565 insertions(+)
create mode 100644 doc/guides/prog_guide/rte_security.rst
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index fbd2a72..9759264 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -47,6 +47,7 @@ Programmer's Guide
traffic_metering_and_policing
traffic_management
cryptodev_lib
+ rte_security
link_bonding_poll_mode_drv_lib
timer_lib
hash_lib
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
new file mode 100644
index 0000000..71be036
--- /dev/null
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -0,0 +1,564 @@
+.. BSD LICENSE
+ Copyright 2017 NXP.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of NXP nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Security Library
+================
+
+The security library provides a framework for the management and provisioning
+of security protocol operations offloaded to hardware-based devices. The
+library defines generic APIs to create and free security sessions which can
+support full protocol offload as well as inline crypto operation with
+NIC or crypto devices. The framework currently supports only the IPsec protocol
+and associated operations; other protocols will be added in the future.
+
+Design Principles
+-----------------
+
+The security library provides an additional offload capability to an existing
+crypto device and/or ethernet device.
+
+.. code-block:: console
+
+ +---------------+
+ | rte_security |
+ +---------------+
+ \ /
+ +-----------+ +--------------+
+ | NIC PMD | | CRYPTO PMD |
+ +-----------+ +--------------+
+
+.. note::
+
+ Currently, the security library does not support multi-process operation.
+ This will be addressed in future releases.
+
+The supported offload types are explained in the sections below.
+
+Inline Crypto
+~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+The crypto processing for the security protocol (e.g. IPsec) is performed
+inline during receive and transmission on a NIC port. The flow based
+security action should be configured on the port.
+
+Ingress Data path - The packet is decrypted in the Rx path and the relevant
+crypto status is set in the Rx descriptors. After successful inline
+crypto processing the packet is presented to the host as a regular Rx packet,
+however all security protocol related headers are still attached to the
+packet. For example, in the case of IPsec, the IPsec tunnel headers (if any)
+and ESP/AH headers will remain in the packet, but the received packet
+contains the decrypted data where the encrypted data was when the packet
+arrived. The driver Rx path checks the descriptors and, based on the
+crypto status, sets additional flags in the ``rte_mbuf.ol_flags`` field.
+
+.. note::
+
+ The underlying device may not support crypto processing for all ingress packets
+ matching a particular flow (e.g. fragmented packets); such packets will
+ be passed up as encrypted packets. It is the responsibility of the application
+ to process such encrypted packets using another crypto driver instance, as in
+ the sketch below.
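+
+A minimal, hedged sketch of the receive side, assuming the mbuf security flags
+added earlier in this series (``PKT_RX_SEC_OFFLOAD``, ``PKT_RX_SEC_OFFLOAD_FAILED``);
+``handle_decrypted()`` and ``handle_encrypted()`` are hypothetical application helpers:
+
+.. code-block:: c
+
+    uint16_t i, nb_rx = rte_eth_rx_burst(port_id, 0, pkts, MAX_BURST);
+
+    for (i = 0; i < nb_rx; i++) {
+        if (pkts[i]->ol_flags & PKT_RX_SEC_OFFLOAD) {
+            if (pkts[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+                rte_pktmbuf_free(pkts[i]);      /* inline processing failed */
+            else
+                handle_decrypted(pkts[i]);      /* decrypted inline by the NIC */
+        } else {
+            handle_encrypted(pkts[i]);          /* e.g. fragments, still encrypted */
+        }
+    }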
+
+Egress Data path - The software prepares the egress packet by adding the
+relevant security protocol headers; only the data is left unencrypted by the
+software. The driver will configure the Tx descriptors accordingly. The
+hardware device will encrypt the data before sending the packet out.
+
+.. note::
+
+ The underlying device may support post encryption TSO.
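+
+A minimal sketch of the transmit side, assuming a security session ``sess`` created
+for the egress SA, the port's security context ``ctx``, the matching capability entry
+``sec_cap`` returned by ``rte_security_capability_get()`` and the mbuf security flags
+added earlier in this series:
+
+.. code-block:: c
+
+    /* attach device specific metadata if the capability requires it */
+    if (sec_cap->ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+        rte_security_set_pkt_metadata(ctx, sess, m, NULL);
+
+    m->ol_flags |= PKT_TX_SEC_OFFLOAD;          /* request inline crypto on Tx */
+    rte_eth_tx_burst(port_id, 0, &m, 1);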
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | |
+ | +------|------+ |
+ | +------V------+ |
+ | | Tunnel | | <------ Add tunnel header to packet
+ | +------|------+ |
+ | +------V------+ |
+ | | ESP | | <------ Add ESP header without trailer to packet
+ | | | | <------ Mark packet to be offloaded, add trailer
+ | +------|------+ | meta-data to mbuf
+ +--------V--------+
+ |
+ +--------V--------+
+ | L2 Stack |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Set hw context for inline crypto offload
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED | <------ Packet Encryption and
+ | NIC | Authentication happens inline
+ | |
+ +-----------------+
+
+
+Inline protocol offload
+~~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+The crypto and protocol processing for the security protocol (e.g. IPsec)
+is performed inline during receive and transmission. The flow based
+security action should be configured on the port.
+
+Ingress Data path - The packet is decrypted in the Rx path and the relevant
+crypto status is set in the Rx descriptors. After successful inline
+crypto processing the packet is presented to the host as a regular Rx packet
+but all security protocol related headers are optionally removed from the
+packet. For example, in the case of IPsec, the IPsec tunnel headers (if any)
+and ESP/AH headers will be removed from the packet and the received packet
+will contain only the decrypted data. The driver Rx path checks the
+descriptors and, based on the crypto status, sets additional flags in
+the ``rte_mbuf.ol_flags`` field.
+
+.. note::
+
+ The underlying device in this case is stateful. It is expected that
+ the device shall support crypto processing for all kinds of packets matching
+ a given flow; this includes fragmented packets (post reassembly).
+ For example, in the case of IPsec the device may internally manage anti-replay.
+ It will provide a configuration option for the anti-replay behavior, i.e. to drop
+ the packets or pass them to the driver with error flags set in the descriptor.
+
+Egress Data path - The software will send the plain packet without any
+security protocol headers added to the packet. The driver will configure
+the security index and other requirements in the Tx descriptors.
+The hardware device will do the security processing on the packet, which includes
+adding the relevant protocol headers and encrypting the data before sending
+the packet out. The software should make sure that the buffer
+has the required headroom and tailroom for any protocol header addition. The
+software may also do early fragmentation if the resultant packet is expected
+to exceed the MTU size.
+
+
+.. note::
+
+ The underlying device will manage the state information required for egress
+ processing. For example, in the case of IPsec, the sequence number will be added
+ to the packet by the device, which shall provide an indication when the sequence
+ number is about to overflow. The underlying device may support post encryption TSO.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | |
+ | +------|------+ |
+ | +------V------+ |
+ | | Desc | | <------ Mark packet to be offloaded
+ | +------|------+ |
+ +--------V--------+
+ |
+ +--------V--------+
+ | L2 Stack |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Set hw context for inline crypto offload
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED | <------ Add tunnel, ESP header etc header to
+ | NIC | packet. Packet Encryption and
+ | | Authentication happens inline.
+ +-----------------+
+
+
+Lookaside protocol offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+This extends librte_cryptodev to support the programming of an IPsec
+Security Association (SA) as part of crypto session creation. In addition
+to the standard crypto processing, as defined by the cryptodev, the security
+protocol processing is also offloaded to the crypto device.
+
+Decryption: The packet is sent to the crypto device for security
+protocol processing. The device will decrypt the packet and may also
+optionally remove additional security headers from the packet.
+For example, in the case of IPsec, the IPsec tunnel headers (if any) and ESP/AH
+headers will be removed from the packet and the decrypted packet may contain
+plain data only.
+
+.. note::
+
+ In the case of IPsec the device may internally manage anti-replay.
+ It will provide a configuration option for the anti-replay behavior, i.e. to drop
+ the packets or pass them to the driver with error flags set in the descriptor.
+
+Encryption: The software will submit the packet to the cryptodev as usual
+for encryption; the hardware device in this case will also add the relevant
+security protocol headers while encrypting the packet. The software
+should make sure that the buffer has the required headroom and tailroom
+for any protocol header addition.
+
+.. note::
+
+ In the case of IPsec, the sequence number will be added to the packet
+ by the device, which shall provide an indication when the sequence number
+ is about to overflow.
+
+.. code-block:: console
+
+ Egress Data Path
+ |
+ +--------|--------+
+ | egress IPsec |
+ | | |
+ | +------V------+ |
+ | | SADB lookup | | <------ SA maps to cryptodev session
+ | +------|------+ |
+ | +------|------+ |
+ | | \--------------------\
+ | | Crypto | | | <- Crypto processing through
+ | | /----------------\ | inline crypto PMD
+ | +------|------+ | | |
+ +--------V--------+ | |
+ | | |
+ +--------V--------+ | | create <-- SA is added to hw
+ | L2 Stack | | | inline using existing create
+ +--------|--------+ | | session sym session APIs
+ | | | |
+ +--------V--------+ +---|---|----V---+
+ | | | \---/ | | <--- Add tunnel, ESP header etc
+ | NIC PMD | | INLINE | | header to packet.Packet
+ | | | CRYPTO PMD | | Encryption/Decryption and
+ +--------|--------+ +----------------+ Authentication happens
+ | inline.
+ +--------|--------+
+ | NIC |
+ +--------|--------+
+ V
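+
+A minimal sketch of the lookaside data path on the application side, assuming a
+crypto operation mempool ``op_mpool``, a lookaside protocol session ``sess``, a
+prepared mbuf ``m`` and cryptodev ``cdev_id`` with queue pair 0 (all illustrative):
+
+.. code-block:: c
+
+    struct rte_crypto_op *op;
+
+    op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+    op->sym->m_src = m;                         /* plain packet to be protected */
+    rte_security_attach_session(op, sess);      /* use the lookaside protocol session */
+
+    rte_cryptodev_enqueue_burst(cdev_id, 0, &op, 1);
+    /* ... poll until the operation completes ... */
+    while (rte_cryptodev_dequeue_burst(cdev_id, 0, &op, 1) == 0)
+        ;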
+
+Device Features and Capabilities
+---------------------------------
+
+Device Capabilities For Security Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The capabilities of a device (crypto or ethernet) which supports security
+operations are defined by the security action type, security protocol, protocol
+capabilities and corresponding crypto capabilities for security. For the full
+scope of the security capability see the definition of the ``rte_security_capability``
+structure in the *DPDK API Reference*.
+
+.. code-block:: c
+
+ struct rte_security_capability;
+
+Each driver (crypto or ethernet) defines its own private array of capabilities
+for the operations it supports. Below is an example of the capabilities for a
+PMD which supports the IPSec protocol.
+
+.. code-block:: c
+
+ static const struct rte_security_capability pmd_security_capabilities[] = {
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = pmd_capabilities
+ },
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = pmd_capabilities
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+ };
+ static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
+ { /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ .auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ },
+ .aad_size = { 0 },
+ .iv_size = { 0 }
+ }
+ }
+ },
+ { /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ .cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }
+ }
+ }
+ };
+
+
+Capabilities Discovery
+~~~~~~~~~~~~~~~~~~~~~~
+
+Discovering the features and capabilities of a driver (crypto/ethernet)
+is achieved through the ``rte_security_capabilities_get()`` function.
+
+.. code-block:: c
+
+ const struct rte_security_capability *rte_security_capabilities_get(struct rte_security_ctx *instance);
+
+This allows the user to query a specific driver and get all device
+security capabilities. It returns an array of ``rte_security_capability`` structures
+which contains all the capabilities for that device.
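+
+A hedged usage sketch, assuming the security context was obtained from an ethernet
+port via ``rte_eth_dev_get_sec_ctx()``; the loop terminates on the
+``RTE_SECURITY_ACTION_TYPE_NONE`` sentinel used by the PMD capability arrays shown
+above:
+
+.. code-block:: c
+
+    struct rte_security_ctx *ctx =
+        (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(port_id);
+    const struct rte_security_capability *caps;
+    uint32_t i;
+
+    caps = rte_security_capabilities_get(ctx);
+    for (i = 0; caps[i].action != RTE_SECURITY_ACTION_TYPE_NONE; i++) {
+        if (caps[i].protocol == RTE_SECURITY_PROTOCOL_IPSEC)
+            printf("IPsec offload supported, action type %d\n", caps[i].action);
+    }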
+
+Security Session Create/Free
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Security sessions are created to store the immutable fields of a particular Security
+Association for a particular protocol. The session is defined by a security session
+configuration structure and is used in the operation processing of a packet flow.
+Sessions are used to manage protocol specific information as well as crypto parameters.
+Security sessions cache this immutable data in an optimal way for the underlying PMD,
+which allows further acceleration of the offloaded crypto workloads.
+
+The Security framework provides APIs to create and free sessions for crypto/ethernet
+devices, where sessions are mempool objects. It is the application's responsibility
+to create and manage the session mempools. The mempool object size should be large
+enough to accommodate the driver's private security session data.
+
+Once the session mempools have been created, ``rte_security_session_create()``
+is used to allocate and initialize a session for the required crypto/ethernet device.
+
+Session APIs need an ``rte_security_ctx`` parameter to identify the crypto/ethernet
+security ops. This parameter can be retrieved using the APIs
+``rte_cryptodev_get_sec_ctx()`` (for a crypto device) or ``rte_eth_dev_get_sec_ctx()``
+(for an ethernet port).
+
+Sessions already created can be updated with ``rte_security_session_update()``.
+
+When a session is no longer used, the user must call ``rte_security_session_destroy()``
+to free the driver private session data and return the memory back to the mempool.
+
+For lookaside protocol offload to a hardware crypto device, the ``rte_crypto_op``
+created by the application is attached to the security session by the API
+``rte_security_attach_session()``.
+
+For Inline Crypto and Inline Protocol offload, device specific metadata is
+updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
+``RTE_SECURITY_TX_OLOAD_NEED_MDATA`` is set in the device's security capability
+offload flags.
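+
+A minimal sketch of the session life cycle, assuming an ethernet port context, an
+illustrative mempool element size ``sess_obj_size`` large enough for the PMD's private
+session data, and a populated ``conf`` (see the next section); the API signature
+follows the header introduced in this series:
+
+.. code-block:: c
+
+    struct rte_security_ctx *ctx =
+        (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(port_id);
+    struct rte_mempool *sess_mp;
+    struct rte_security_session *sess;
+
+    sess_mp = rte_mempool_create("sec_sess_mp", 1024, sess_obj_size, 0, 0,
+                                 NULL, NULL, NULL, NULL, rte_socket_id(), 0);
+
+    sess = rte_security_session_create(ctx, &conf, sess_mp);
+    if (sess == NULL)
+        rte_exit(EXIT_FAILURE, "Failed to create security session\n");
+
+    /* ... program flows or attach crypto ops using the session ... */
+
+    rte_security_session_destroy(ctx, sess);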
+
+Security session configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Security Session configuration structure is defined as ``rte_security_session_conf``
+
+.. code-block:: c
+
+ struct rte_security_session_conf {
+ enum rte_security_session_action_type action_type;
+ /**< Type of action to be performed on the session */
+ enum rte_security_session_protocol protocol;
+ /**< Security protocol to be configured */
+ union {
+ struct rte_security_ipsec_xform ipsec;
+ struct rte_security_macsec_xform macsec;
+ };
+ /**< Configuration parameters for security session */
+ struct rte_crypto_sym_xform *crypto_xform;
+ /**< Security Session Crypto Transformations */
+ };
+
+The configuration structure reuses the ``rte_crypto_sym_xform`` struct for crypto related
+configuration. The ``rte_security_session_action_type`` enum is used to specify whether the
+session is configured for Lookaside Protocol offload, Inline Crypto or Inline Protocol
+offload.
+
+.. code-block:: c
+
+ enum rte_security_session_action_type {
+ RTE_SECURITY_ACTION_TYPE_NONE,
+ /**< No security actions */
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ /**< Crypto processing for security protocol is processed inline
+ * during transmission */
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ /**< All security protocol processing is performed inline during
+ * transmission */
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ /**< All security protocol processing including crypto is performed
+ * on a lookaside accelerator */
+ };
+
+The ``rte_security_session_protocol`` is defined as
+
+.. code-block:: c
+
+ enum rte_security_session_protocol {
+ RTE_SECURITY_PROTOCOL_IPSEC,
+ /**< IPsec Protocol */
+ RTE_SECURITY_PROTOCOL_MACSEC,
+ /**< MACSec Protocol */
+ };
+
+Currently the library defines configuration parameters for IPsec only. For other
+protocols like MACsec, structures and enums are defined as placeholders which
+will be updated in the future.
+
+IPsec related configuration parameters are defined in ``rte_security_ipsec_xform``
+
+.. code-block:: c
+
+ struct rte_security_ipsec_xform {
+ uint32_t spi;
+ /**< SA security parameter index */
+ uint32_t salt;
+ /**< SA salt */
+ struct rte_security_ipsec_sa_options options;
+ /**< various SA options */
+ enum rte_security_ipsec_sa_direction direction;
+ /**< IPSec SA Direction - Egress/Ingress */
+ enum rte_security_ipsec_sa_protocol proto;
+ /**< IPsec SA Protocol - AH/ESP */
+ enum rte_security_ipsec_sa_mode mode;
+ /**< IPsec SA Mode - transport/tunnel */
+ struct rte_security_ipsec_tunnel_param tunnel;
+ /**< Tunnel parameters, NULL for transport mode */
+ };
+
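+As a hedged example, a session configuration for an egress ESP transport mode SA
+with an AES-GCM transform could be populated as follows (the SPI, salt and
+``aead_xform`` values are illustrative):
+
+.. code-block:: c
+
+    struct rte_security_session_conf conf = {
+        .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+        .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+        .ipsec = {
+            .spi = 0x1000,                  /* illustrative SPI */
+            .salt = 0x12345678,             /* illustrative salt */
+            .options = { 0 },
+            .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+            .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+            .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+        },
+        .crypto_xform = &aead_xform,        /* an AES-GCM rte_crypto_sym_xform */
+    };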
+
+Security API
+~~~~~~~~~~~~
+
+The rte_security Library API is described in the *DPDK API Reference* document.
+
+Flow based Security Session
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the case of NIC based offloads, the security session specified in the
+``rte_flow_action_security`` action must be created on the same port on which
+the flow action is specified.
+
+The ingress/egress flow attribute should match that specified in the security
+session if the security session supports the definition of the direction.
+
+Multiple flows can be configured to use the same security session. For
+example, if the security session specifies an egress IPsec SA, then multiple
+flows can be mapped to that SA. In the case of an ingress IPsec SA it is
+only valid to have a single flow mapped to that security session.
+
+.. code-block:: console
+
+ Configuration Path
+ |
+ +--------|--------+
+ | Add/Remove |
+ | IPsec SA | <------ Build security flow action of
+ | | | ipsec transform
+ |--------|--------|
+ |
+ +--------V--------+
+ | Flow API |
+ +--------|--------+
+ |
+ +--------V--------+
+ | |
+ | NIC PMD | <------ Add/Remove SA to/from hw context
+ | |
+ +--------|--------+
+ |
+ +--------|--------+
+ | HW ACCELERATED |
+ | NIC |
+ | |
+ +--------|--------+
+
+* Add/Delete SA flow:
+ To add a new inline SA, construct a rte_flow_item list for Ethernet + IP + ESP
+ using the SA selectors and the ``rte_crypto_ipsec_xform`` as the ``rte_flow_action``.
+ Note that any rte_flow_item may be empty, which means it is not checked; a minimal
+ C sketch follows the diagrams below.
+
+.. code-block:: console
+
+ In its most basic form, IPsec flow specification is as follows:
+ +-------+ +----------+ +--------+ +-----+
+ | Eth | -> | IP4/6 | -> | ESP | -> | END |
+ +-------+ +----------+ +--------+ +-----+
+
+ However, the API can represent IPsec crypto offload with any encapsulation:
+ +-------+ +--------+ +-----+
+ | Eth | -> ... -> | ESP | -> | END |
+ +-------+ +--------+ +-----+
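+
+A minimal C sketch of programming such a flow, assuming a security session ``sess``
+created for an ingress SA; the empty pattern specs and the error handling are
+illustrative, and the action conf carries the security session pointer, as consumed
+by the ixgbe PMD later in this series:
+
+.. code-block:: c
+
+    struct rte_flow_attr attr = { .ingress = 1 };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH },
+        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
+        { .type = RTE_FLOW_ITEM_TYPE_ESP },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action action[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sess },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+    struct rte_flow_error err;
+    struct rte_flow *flow;
+
+    flow = rte_flow_create(port_id, &attr, pattern, action, &err);
+    if (flow == NULL)
+        printf("Failed to create IPsec flow: %s\n",
+               err.message ? err.message : "unknown");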
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (6 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 07/10] doc: add details of rte security Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-26 7:09 ` David Marchand
2017-11-01 19:58 ` Thomas Monjalon
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 09/10] crypto/dpaa2_sec: add support for protocol offload ipsec Akhil Goyal
` (2 subsequent siblings)
10 siblings, 2 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
From: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
drivers/net/ixgbe/Makefile | 2 +-
drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
drivers/net/ixgbe/ixgbe_ethdev.c | 11 +
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
drivers/net/ixgbe/ixgbe_flow.c | 47 +++
drivers/net/ixgbe/ixgbe_ipsec.c | 737 +++++++++++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_ipsec.h | 151 +++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 59 ++-
drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 57 +++
10 files changed, 1082 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 6a144e7..f03c426 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -120,11 +120,11 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
else
SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
endif
-
ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
endif
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h
index 4aab278..bb5dfd2 100644
--- a/drivers/net/ixgbe/base/ixgbe_osdep.h
+++ b/drivers/net/ixgbe/base/ixgbe_osdep.h
@@ -161,4 +161,12 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr)
#define IXGBE_WRITE_REG_ARRAY(hw, reg, index, value) \
IXGBE_PCI_REG_WRITE(IXGBE_PCI_REG_ARRAY_ADDR((hw), (reg), (index)), (value))
+#define IXGBE_WRITE_REG_THEN_POLL_MASK(hw, reg, val, mask, poll_ms) \
+do { \
+ uint32_t cnt = poll_ms; \
+ IXGBE_WRITE_REG(hw, (reg), (val)); \
+ while (((IXGBE_READ_REG(hw, (reg))) & (mask)) && (cnt--)) \
+ rte_delay_ms(1); \
+} while (0)
+
#endif /* _IXGBE_OS_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14b9c53..10bf486 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -61,6 +61,7 @@
#include <rte_random.h>
#include <rte_dev.h>
#include <rte_hash_crc.h>
+#include <rte_security_driver.h>
#include "ixgbe_logs.h"
#include "base/ixgbe_api.h"
@@ -1167,6 +1168,11 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
return 0;
}
+ /* Initialize security_ctx only for primary process*/
+ eth_dev->security_ctx = ixgbe_ipsec_ctx_create(eth_dev);
+ if (eth_dev->security_ctx == NULL)
+ return -ENOMEM;
+
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
@@ -1401,6 +1407,8 @@ eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(eth_dev);
+ rte_free(eth_dev->security_ctx);
+
return 0;
}
@@ -3695,6 +3703,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
hw->mac.type == ixgbe_mac_X550EM_a)
dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+ dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+ dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index e28c856..f5b52c4 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
+#include "ixgbe_ipsec.h"
#include <rte_time.h>
#include <rte_hash.h>
#include <rte_pci.h>
@@ -486,7 +487,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-
+ struct ixgbe_ipsec ipsec;
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
@@ -543,6 +544,9 @@ struct ixgbe_adapter {
#define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
(&((struct ixgbe_adapter *)adapter)->tm_conf)
+#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
+ (&((struct ixgbe_adapter *)adapter)->ipsec)
+
/*
* RX/TX function prototypes
*/
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 904c146..13c8243 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -187,6 +187,9 @@ const struct rte_flow_action *next_no_void_action(
* END
* other members in mask and spec should set to 0x00.
* item->last should be NULL.
+ *
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY.
+ *
*/
static int
cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
@@ -226,6 +229,41 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return -rte_errno;
}
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ act = next_no_void_action(actions, NULL);
+ if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
+ const void *conf = act->conf;
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ return -rte_errno;
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last ||
+ item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+
+ filter->proto = IPPROTO_ESP;
+ return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
+ item->type == RTE_FLOW_ITEM_TYPE_IPV6);
+ }
+
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -519,6 +557,10 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
+ /* ESP flow not really a flow*/
+ if (filter->proto == IPPROTO_ESP)
+ return 0;
+
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -2758,6 +2800,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
+
+ /* ESP flow not really a flow*/
+ if (ntuple_filter.proto == IPPROTO_ESP)
+ return flow;
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
new file mode 100644
index 0000000..99c0a73
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -0,0 +1,737 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_ip.h>
+#include <rte_jhash.h>
+#include <rte_security_driver.h>
+#include <rte_cryptodev.h>
+#include <rte_flow.h>
+
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_api.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_ipsec.h"
+
+#define RTE_IXGBE_REGISTER_POLL_WAIT_5_MS 5
+
+#define IXGBE_WAIT_RREAD \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
+ IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_RWRITE \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSRXIDX, reg_val, \
+ IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_TREAD \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
+ IPSRXIDX_READ, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+#define IXGBE_WAIT_TWRITE \
+ IXGBE_WRITE_REG_THEN_POLL_MASK(hw, IXGBE_IPSTXIDX, reg_val, \
+ IPSRXIDX_WRITE, RTE_IXGBE_REGISTER_POLL_WAIT_5_MS)
+
+#define CMP_IP(a, b) (\
+ (a).ipv6[0] == (b).ipv6[0] && \
+ (a).ipv6[1] == (b).ipv6[1] && \
+ (a).ipv6[2] == (b).ipv6[2] && \
+ (a).ipv6[3] == (b).ipv6[3])
+
+
+static void
+ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int i = 0;
+
+ /* clear Rx IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ uint16_t index = i << 3;
+ uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ IXGBE_WAIT_RWRITE;
+ }
+
+ /* clear Rx SPI and Rx/Tx SA tables*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ uint32_t index = i << 3;
+ uint32_t reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+ }
+}
+
+static int
+ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
+{
+ struct rte_eth_dev *dev = ic_session->dev;
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
+ dev->data->dev_private);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip,
+ ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+ /* If no match, find a free entry in the IP table*/
+ if (ip_index < 0) {
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (priv->rx_ip_tbl[i].ref_count == 0) {
+ ip_index = i;
+ break;
+ }
+ }
+ }
+
+ /* Fail if no match and no free entries*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx SA table\n");
+ return -1;
+ }
+
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0] =
+ ic_session->dst_ip.ipv6[0];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1] =
+ ic_session->dst_ip.ipv6[1];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2] =
+ ic_session->dst_ip.ipv6[2];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3] =
+ ic_session->dst_ip.ipv6[3];
+ priv->rx_ip_tbl[ip_index].ref_count++;
+
+ priv->rx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->rx_sa_tbl[sa_index].ip_index = ip_index;
+ priv->rx_sa_tbl[sa_index].key[3] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
+ priv->rx_sa_tbl[sa_index].key[2] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
+ priv->rx_sa_tbl[sa_index].key[1] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
+ priv->rx_sa_tbl[sa_index].key[0] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
+ priv->rx_sa_tbl[sa_index].salt =
+ rte_cpu_to_be_32(ic_session->salt);
+ priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID;
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
+ priv->rx_sa_tbl[sa_index].mode |=
+ (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
+ if (ic_session->dst_ip.type == IPv6)
+ priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6;
+ priv->rx_sa_tbl[sa_index].used = 1;
+
+ /* write IP table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_IP | (ip_index << 3);
+ if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv4);
+ } else {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3]);
+ }
+ IXGBE_WAIT_RWRITE;
+
+ /* write SPI table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
+ priv->rx_sa_tbl[sa_index].spi);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
+ priv->rx_sa_tbl[sa_index].ip_index);
+ IXGBE_WAIT_RWRITE;
+
+ /* write Key table entry*/
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE |
+ IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
+ priv->rx_sa_tbl[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
+ priv->rx_sa_tbl[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
+ priv->rx_sa_tbl[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
+ priv->rx_sa_tbl[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
+ priv->rx_sa_tbl[sa_index].salt);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
+ priv->rx_sa_tbl[sa_index].mode);
+ IXGBE_WAIT_RWRITE;
+
+ } else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
+ int i;
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Tx SA table\n");
+ return -1;
+ }
+
+ priv->tx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->tx_sa_tbl[sa_index].key[3] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[0]);
+ priv->tx_sa_tbl[sa_index].key[2] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[4]);
+ priv->tx_sa_tbl[sa_index].key[1] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[8]);
+ priv->tx_sa_tbl[sa_index].key[0] =
+ rte_cpu_to_be_32(*(uint32_t *)&ic_session->key[12]);
+ priv->tx_sa_tbl[sa_index].salt =
+ rte_cpu_to_be_32(ic_session->salt);
+
+ reg_val = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
+ priv->tx_sa_tbl[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
+ priv->tx_sa_tbl[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
+ priv->tx_sa_tbl[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
+ priv->tx_sa_tbl[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
+ priv->tx_sa_tbl[sa_index].salt);
+ IXGBE_WAIT_TWRITE;
+
+ priv->tx_sa_tbl[i].used = 1;
+ ic_session->sa_index = sa_index;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
+ struct ixgbe_crypto_session *ic_session)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_ipsec *priv =
+ IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+
+ /* Fail if no match*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx SA table\n");
+ return -1;
+ }
+
+ /* Disable and clear Rx SPI and key table entries */
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ priv->rx_sa_tbl[sa_index].used = 0;
+
+ /* If last used then clear the IP table entry*/
+ priv->rx_ip_tbl[ip_index].ref_count--;
+ if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
+ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP |
+ (ip_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ }
+ } else { /* session->dir == RTE_CRYPTO_OUTBOUND */
+ int i;
+
+ /* Find a match in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Tx SA table\n");
+ return -1;
+ }
+ reg_val = IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+
+ priv->tx_sa_tbl[sa_index].used = 0;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_create_session(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *session,
+ struct rte_mempool *mempool)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+ struct ixgbe_crypto_session *ic_session = NULL;
+ struct rte_crypto_aead_xform *aead_xform;
+ struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
+
+ if (rte_mempool_get(mempool, (void **)&ic_session)) {
+ PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
+ return -ENOMEM;
+ }
+
+ if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
+ conf->crypto_xform->aead.algo !=
+ RTE_CRYPTO_AEAD_AES_GCM) {
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ return -ENOTSUP;
+ }
+ aead_xform = &conf->crypto_xform->aead;
+
+ if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ return -ENOTSUP;
+ }
+ } else {
+ if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ return -ENOTSUP;
+ }
+ }
+
+ ic_session->key = aead_xform->key.data;
+ memcpy(&ic_session->salt,
+ &aead_xform->key.data[aead_xform->key.length], 4);
+ ic_session->spi = conf->ipsec.spi;
+ ic_session->dev = eth_dev;
+
+ set_sec_session_private_data(session, ic_session);
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ if (ixgbe_crypto_add_sa(ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ return -EPERM;
+ }
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_crypto_remove_session(void *device,
+ struct rte_security_session *session)
+{
+ struct rte_eth_dev *eth_dev = device;
+ struct ixgbe_crypto_session *ic_session =
+ (struct ixgbe_crypto_session *)
+ get_sec_session_private_data(session);
+ struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+
+ if (eth_dev != ic_session->dev) {
+ PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ return -ENODEV;
+ }
+
+ if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ return -EFAULT;
+ }
+
+ rte_mempool_put(mempool, (void *)ic_session);
+
+ return 0;
+}
+
+static inline uint8_t
+ixgbe_crypto_compute_pad_len(struct rte_mbuf *m)
+{
+ if (m->nb_segs == 1) {
+ /* 16 bytes ICV + 2 bytes ESP trailer + payload padding size
+ * payload padding size is stored at <pkt_len - 18>
+ */
+ uint8_t *esp_pad_len = rte_pktmbuf_mtod_offset(m, uint8_t *,
+ rte_pktmbuf_pkt_len(m) -
+ (ESP_TRAILER_SIZE + ESP_ICV_SIZE));
+ return *esp_pad_len + ESP_TRAILER_SIZE + ESP_ICV_SIZE;
+ }
+ return 0;
+}
+
+static int
+ixgbe_crypto_update_mb(void *device __rte_unused,
+ struct rte_security_session *session,
+ struct rte_mbuf *m, void *params __rte_unused)
+{
+ struct ixgbe_crypto_session *ic_session =
+ get_sec_session_private_data(session);
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ union ixgbe_crypto_tx_desc_md *mdata =
+ (union ixgbe_crypto_tx_desc_md *)&m->udata64;
+ mdata->enc = 1;
+ mdata->sa_idx = ic_session->sa_index;
+ mdata->pad_len = ixgbe_crypto_compute_pad_len(m);
+ }
+ return 0;
+}
+
+
+static const struct rte_security_capability *
+ixgbe_crypto_capabilities_get(void *device __rte_unused)
+{
+ static const struct rte_cryptodev_capabilities
+ aes_gcm_gmac_crypto_capabilities[] = {
+ { /* AES GMAC (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES GCM (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
+ }, }
+ },
+ };
+
+ static const struct rte_security_capability
+ ixgbe_security_capabilities[] = {
+ { /* IPsec Inline Crypto ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Transport Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+ };
+
+ return ixgbe_security_capabilities;
+}
+
+
+int
+ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint32_t reg;
+
+ /* sanity checks */
+ if (dev->data->dev_conf.rxmode.enable_lro) {
+ PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
+ return -1;
+ }
+ if (!dev->data->dev_conf.rxmode.hw_strip_crc) {
+ PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
+ return -1;
+ }
+
+
+ /* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
+
+ /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
+ * hang will occur with heavy traffic.
+ */
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+ reg = (reg & 0xFFFFFFF0) | 0x3;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+
+ reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
+ reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
+ if (reg != 0) {
+ PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+ return -1;
+ }
+ }
+ if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
+ IXGBE_SECTXCTRL_STORE_FORWARD);
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
+ if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
+ PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+ return -1;
+ }
+ }
+
+ ixgbe_crypto_clear_ipsec_tables(dev);
+
+ return 0;
+}
+
+int
+ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
+ const void *ip_spec,
+ uint8_t is_ipv6)
+{
+ struct ixgbe_crypto_session *ic_session
+ = get_sec_session_private_data(sess);
+
+ if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ if (is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
+ ic_session->src_ip.type = IPv6;
+ ic_session->dst_ip.type = IPv6;
+ rte_memcpy(ic_session->src_ip.ipv6,
+ ipv6->hdr.src_addr, 16);
+ rte_memcpy(ic_session->dst_ip.ipv6,
+ ipv6->hdr.dst_addr, 16);
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
+ ic_session->src_ip.type = IPv4;
+ ic_session->dst_ip.type = IPv4;
+ ic_session->src_ip.ipv4 = ipv4->hdr.src_addr;
+ ic_session->dst_ip.ipv4 = ipv4->hdr.dst_addr;
+ }
+ return ixgbe_crypto_add_sa(ic_session);
+ }
+
+ return 0;
+}
+
+static struct rte_security_ops ixgbe_security_ops = {
+ .session_create = ixgbe_crypto_create_session,
+ .session_update = NULL,
+ .session_stats_get = NULL,
+ .session_destroy = ixgbe_crypto_remove_session,
+ .set_pkt_metadata = ixgbe_crypto_update_mb,
+ .capabilities_get = ixgbe_crypto_capabilities_get
+};
+
+struct rte_security_ctx *
+ixgbe_ipsec_ctx_create(struct rte_eth_dev *dev)
+{
+ struct rte_security_ctx *ctx = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (ctx) {
+ ctx->device = (void *)dev;
+ ctx->ops = &ixgbe_security_ops;
+ ctx->sess_cnt = 0;
+ }
+ return ctx;
+}
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
new file mode 100644
index 0000000..fb8fefc
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.h
@@ -0,0 +1,151 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef IXGBE_IPSEC_H_
+#define IXGBE_IPSEC_H_
+
+#include <rte_security.h>
+
+#define IPSRXIDX_RX_EN 0x00000001
+#define IPSRXIDX_TABLE_IP 0x00000002
+#define IPSRXIDX_TABLE_SPI 0x00000004
+#define IPSRXIDX_TABLE_KEY 0x00000006
+#define IPSRXIDX_WRITE 0x80000000
+#define IPSRXIDX_READ 0x40000000
+#define IPSRXMOD_VALID 0x00000001
+#define IPSRXMOD_PROTO 0x00000004
+#define IPSRXMOD_DECRYPT 0x00000008
+#define IPSRXMOD_IPV6 0x00000010
+#define IXGBE_ADVTXD_POPTS_IPSEC 0x00000400
+#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000
+#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000
+#define IXGBE_RXDADV_IPSEC_STATUS_SECP 0x00020000
+#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK 0x18000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH 0x10000000
+#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED 0x18000000
+
+#define IPSEC_MAX_RX_IP_COUNT 128
+#define IPSEC_MAX_SA_COUNT 1024
+
+#define ESP_ICV_SIZE 16
+#define ESP_TRAILER_SIZE 2
+
+enum ixgbe_operation {
+ IXGBE_OP_AUTHENTICATED_ENCRYPTION,
+ IXGBE_OP_AUTHENTICATED_DECRYPTION
+};
+
+enum ixgbe_gcm_key {
+ IXGBE_GCM_KEY_128,
+ IXGBE_GCM_KEY_256
+};
+
+/**
+ * Generic IP address structure
+ * TODO: Find a better location for this, possibly rte_net.h.
+ **/
+struct ipaddr {
+ enum ipaddr_type {
+ IPv4,
+ IPv6
+ } type;
+ /**< IP Address Type - IPv4/IPv6 */
+
+ union {
+ uint32_t ipv4;
+ uint32_t ipv6[4];
+ };
+};
+
+/** inline crypto private session structure */
+struct ixgbe_crypto_session {
+ enum ixgbe_operation op;
+ uint8_t *key;
+ uint32_t salt;
+ uint32_t sa_index;
+ uint32_t spi;
+ struct ipaddr src_ip;
+ struct ipaddr dst_ip;
+ struct rte_eth_dev *dev;
+} __rte_cache_aligned;
+
+struct ixgbe_crypto_rx_ip_table {
+ struct ipaddr ip;
+ uint16_t ref_count;
+};
+struct ixgbe_crypto_rx_sa_table {
+ uint32_t spi;
+ uint32_t ip_index;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t mode;
+ uint8_t used;
+};
+
+struct ixgbe_crypto_tx_sa_table {
+ uint32_t spi;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t used;
+};
+
+union ixgbe_crypto_tx_desc_md {
+ uint64_t data;
+ struct {
+ /**< SA table index */
+ uint32_t sa_idx;
+ /**< ICV and ESP trailer length */
+ uint8_t pad_len;
+ /**< enable encryption */
+ uint8_t enc;
+ };
+};
+
+struct ixgbe_ipsec {
+ struct ixgbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
+ struct ixgbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
+ struct ixgbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
+};
+
+
+struct rte_security_ctx *
+ixgbe_ipsec_ctx_create(struct rte_eth_dev *dev);
+int ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
+int ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
+ const void *ip_spec,
+ uint8_t is_ipv6);
+
+
+
+#endif /* IXGBE_IPSEC_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0038dfb..38a014a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -93,6 +93,7 @@
PKT_TX_TCP_SEG | \
PKT_TX_MACSEC | \
PKT_TX_OUTER_IP_CKSUM | \
+ PKT_TX_SEC_OFFLOAD | \
IXGBE_TX_IEEE1588_TMST)
#define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
@@ -395,7 +396,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
- uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
+ uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
+ union ixgbe_crypto_tx_desc_md *mdata)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx = 0;
@@ -479,6 +481,17 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
+ if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+ seqnum_seed |=
+ (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata->sa_idx);
+ type_tucmd_mlhl |= mdata->enc ?
+ (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
+ IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
+ type_tucmd_mlhl |=
+ (mdata->pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
+ tx_offload_mask.sa_idx |= ~0;
+ tx_offload_mask.sec_pad_len |= ~0;
+ }
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -657,6 +670,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
+ uint8_t use_ipsec;
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -684,6 +698,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
+ use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -695,6 +710,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+ if (use_ipsec) {
+ union ixgbe_crypto_tx_desc_md *ipsec_mdata =
+ (union ixgbe_crypto_tx_desc_md *)
+ &tx_pkt->udata64;
+ tx_offload.sa_idx = ipsec_mdata->sa_idx;
+ tx_offload.sec_pad_len = ipsec_mdata->pad_len;
+ }
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -855,7 +877,9 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- tx_offload);
+ tx_offload,
+ (union ixgbe_crypto_tx_desc_md *)
+ &tx_pkt->udata64);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -873,6 +897,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+ if (use_ipsec)
+ olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
m_seg = tx_pkt;
do {
@@ -1447,6 +1473,12 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
}
+ if (rx_status & IXGBE_RXD_STAT_SECP) {
+ pkt_flags |= PKT_RX_SEC_OFFLOAD;
+ if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
+ pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+ }
+
return pkt_flags;
}
@@ -2364,8 +2396,10 @@ void __attribute__((cold))
ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
- if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS)
- && (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
+ if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS) &&
+ (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) &&
+ !(dev->data->dev_conf.txmode.offloads
+ & DEV_TX_OFFLOAD_SECURITY)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = NULL;
#ifdef RTE_IXGBE_INC_VECTOR
@@ -2535,6 +2569,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->txq_flags = tx_conf->txq_flags;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
+ txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
+ DEV_TX_OFFLOAD_SECURITY);
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -4519,6 +4555,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
rxq->rx_using_sse = rx_using_sse;
+ rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_SECURITY);
}
}
@@ -5006,6 +5044,19 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
ixgbe_setup_loopback_link_82599(hw);
+ if ((dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads &
+ DEV_TX_OFFLOAD_SECURITY)) {
+ ret = ixgbe_crypto_enable_ipsec(dev);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "ixgbe_crypto_enable_ipsec fails with %d.",
+ ret);
+ return ret;
+ }
+ }
+
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 81c527f..4017831 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -138,8 +138,10 @@ struct ixgbe_rx_queue {
uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
- uint16_t rx_using_sse;
+ uint8_t rx_using_sse;
/**< indicates that vector RX is in use */
+ uint8_t using_ipsec;
+ /**< indicates that IPsec RX feature is in use */
#ifdef RTE_IXGBE_INC_VECTOR
uint16_t rxrearm_nb; /**< number of remaining to be re-armed */
uint16_t rxrearm_start; /**< the idx we start the re-arming from */
@@ -183,6 +185,10 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
+
+ /* inline ipsec related */
+ uint64_t sa_idx:8; /**< TX SA database entry index */
+ uint64_t sec_pad_len:4; /**< padding length */
};
};
@@ -247,6 +253,9 @@ struct ixgbe_tx_queue {
struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
const struct ixgbe_txq_ops *ops; /**< txq ops */
uint8_t tx_deferred_start; /**< not in global dev start. */
+ uint8_t using_ipsec;
+ /**< indicates that IPsec TX feature is in use */
+
};
struct ixgbe_txq_ops {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e704a7f..b65220f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -123,6 +123,59 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
}
static inline void
+desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+ __m128i sterr0, sterr1, sterr2, sterr3;
+ __m128i tmp1, tmp2, tmp3, tmp4;
+ __m128i rearm0, rearm1, rearm2, rearm3;
+
+ const __m128i ipsec_sterr_msk = _mm_set_epi32(
+ 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
+ IXGBE_RXDADV_IPSEC_ERROR_AUTH_FAILED,
+ 0, 0);
+ const __m128i ipsec_proc_msk = _mm_set_epi32(
+ 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
+ const __m128i ipsec_err_flag = _mm_set_epi32(
+ 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
+ 0, 0);
+ const __m128i ipsec_proc_flag = _mm_set_epi32(
+ 0, PKT_RX_SEC_OFFLOAD, 0, 0);
+
+ rearm0 = _mm_load_si128((__m128i *)&rx_pkts[0]->rearm_data);
+ rearm1 = _mm_load_si128((__m128i *)&rx_pkts[1]->rearm_data);
+ rearm2 = _mm_load_si128((__m128i *)&rx_pkts[2]->rearm_data);
+ rearm3 = _mm_load_si128((__m128i *)&rx_pkts[3]->rearm_data);
+ sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
+ sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
+ sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
+ sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
+ tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
+ tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
+ tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
+ sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+ _mm_and_si128(tmp2, ipsec_proc_flag));
+ sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
+ _mm_and_si128(tmp4, ipsec_proc_flag));
+ tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
+ tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
+ tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
+ sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+ _mm_and_si128(tmp2, ipsec_proc_flag));
+ sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
+ _mm_and_si128(tmp4, ipsec_proc_flag));
+ rearm0 = _mm_or_si128(rearm0, sterr0);
+ rearm1 = _mm_or_si128(rearm1, sterr1);
+ rearm2 = _mm_or_si128(rearm2, sterr2);
+ rearm3 = _mm_or_si128(rearm3, sterr3);
+ _mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+ _mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+ _mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+ _mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
struct rte_mbuf **rx_pkts)
{
@@ -310,6 +363,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ixgbe_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
+ uint8_t use_ipsec = rxq->using_ipsec;
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -473,6 +527,9 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, mbuf_init, vlan_flags, &rx_pkts[pos]);
+ if (unlikely(use_ipsec))
+ desc_to_olflags_v_ipsec(descs, rx_pkts);
+
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
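A minimal application-side sketch may help when reading ixgbe_dev_rxtx_start() above: the driver only calls ixgbe_crypto_enable_ipsec() when the new security offload flags are present in the device configuration. The port id, queue counts and the rest of the configuration below are placeholders, and a real application sets many more rxmode/txmode fields before starting the port.

#include <string.h>
#include <rte_ethdev.h>

/*
 * Sketch only: request the ixgbe inline IPsec data path by setting the
 * DEV_RX_OFFLOAD_SECURITY / DEV_TX_OFFLOAD_SECURITY flags that
 * ixgbe_dev_rxtx_start() checks before enabling IPsec in hardware.
 */
static int
enable_inline_ipsec(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_SECURITY;
	conf.txmode.offloads |= DEV_TX_OFFLOAD_SECURITY;

	/* one Rx and one Tx queue, for illustration only */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}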
* Re: [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec Akhil Goyal
@ 2017-10-26 7:09 ` David Marchand
2017-10-26 7:19 ` David Marchand
2017-11-01 19:58 ` Thomas Monjalon
1 sibling, 1 reply; 195+ messages in thread
From: David Marchand @ 2017-10-26 7:09 UTC (permalink / raw)
To: radu.nicolau, Declan Doherty
Cc: dev, Pablo de Lara, Hemant Agrawal, borisp, aviadye,
Thomas Monjalon, sandeep.malik, Jerin Jacob, Mcnamara, John,
Ananyev, Konstantin, shahafs, Olivier Matz, Akhil Goyal
Hello Radu, Declan,
On Wed, Oct 25, 2017 at 5:07 PM, Akhil Goyal <akhil.goyal@nxp.com> wrote:
> From: Radu Nicolau <radu.nicolau@intel.com>
>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
> drivers/net/ixgbe/Makefile | 2 +-
> drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 11 +
> drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
> drivers/net/ixgbe/ixgbe_flow.c | 47 +++
> drivers/net/ixgbe/ixgbe_ipsec.c | 737 +++++++++++++++++++++++++++++++++
> drivers/net/ixgbe/ixgbe_ipsec.h | 151 +++++++
> drivers/net/ixgbe/ixgbe_rxtx.c | 59 ++-
> drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 57 +++
> 10 files changed, 1082 insertions(+), 7 deletions(-)
> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
This patch breaks ixgbe pmd compilation when the rte_security library
is disabled.
--
David Marchand
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec
2017-10-26 7:09 ` David Marchand
@ 2017-10-26 7:19 ` David Marchand
0 siblings, 0 replies; 195+ messages in thread
From: David Marchand @ 2017-10-26 7:19 UTC (permalink / raw)
To: radu.nicolau, Declan Doherty
Cc: dev, Pablo de Lara, Hemant Agrawal, borisp, aviadye,
Thomas Monjalon, sandeep.malik, Jerin Jacob, Mcnamara, John,
Ananyev, Konstantin, shahafs, Olivier Matz, Akhil Goyal
On Thu, Oct 26, 2017 at 9:09 AM, David Marchand
<david.marchand@6wind.com> wrote:
> Hello Radu, Declan,
>
> On Wed, Oct 25, 2017 at 5:07 PM, Akhil Goyal <akhil.goyal@nxp.com> wrote:
>> From: Radu Nicolau <radu.nicolau@intel.com>
>>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> ---
>> drivers/net/ixgbe/Makefile | 2 +-
>> drivers/net/ixgbe/base/ixgbe_osdep.h | 8 +
>> drivers/net/ixgbe/ixgbe_ethdev.c | 11 +
>> drivers/net/ixgbe/ixgbe_ethdev.h | 6 +-
>> drivers/net/ixgbe/ixgbe_flow.c | 47 +++
>> drivers/net/ixgbe/ixgbe_ipsec.c | 737 +++++++++++++++++++++++++++++++++
>> drivers/net/ixgbe/ixgbe_ipsec.h | 151 +++++++
>> drivers/net/ixgbe/ixgbe_rxtx.c | 59 ++-
>> drivers/net/ixgbe/ixgbe_rxtx.h | 11 +-
>> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 57 +++
>> 10 files changed, 1082 insertions(+), 7 deletions(-)
>> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
>> create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
>
> This patch breaks ixgbe pmd compilation when the rte_security library
> is disabled.
With some logs :
CC ixgbe_rxtx.o
In file included from .../dpdk-upstream/drivers/net/ixgbe/ixgbe_ethdev.h:41:0,
from .../dpdk-upstream/drivers/net/ixgbe/ixgbe_rxtx.c:78:
.../dpdk-upstream/drivers/net/ixgbe/ixgbe_ipsec.h:37:26: fatal error:
rte_security.h: No such file or directory
compilation terminated.
.../dpdk-upstream/mk/internal/rte.compile-pre.mk:138: recipe for
target 'ixgbe_rxtx.o' failed
make[6]: *** [ixgbe_rxtx.o] Error 1
.../dpdk-upstream/mk/rte.subdir.mk:63: recipe for target 'ixgbe' failed
make[5]: *** [ixgbe] Error 2
.../dpdk-upstream/mk/rte.subdir.mk:63: recipe for target 'net' failed
make[4]: *** [net] Error 2
.../dpdk-upstream/mk/rte.sdkbuild.mk:76: recipe for target 'drivers' failed
make[3]: *** [drivers] Error 2
.../dpdk-upstream/mk/rte.sdkroot.mk:128: recipe for target 'all' failed
make[2]: *** [all] Error 2
--
David Marchand
^ permalink raw reply [flat|nested] 195+ messages in thread
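For readers following the thread: ixgbe_ipsec.h includes rte_security.h unconditionally and is itself pulled in via ixgbe_ethdev.h, which is why the whole PMD stops building once the library is disabled. Below is a hypothetical sketch of the kind of guard that avoids the breakage; it assumes the usual DPDK convention that CONFIG_RTE_LIBRTE_SECURITY=y exposes an RTE_LIBRTE_SECURITY macro, and the fix actually merged may well take a different form (e.g. a Makefile-level dependency).

/*
 * Hypothetical sketch, not the merged fix: only reference rte_security
 * and the ixgbe IPsec code when the library is enabled in the build.
 */
#ifdef RTE_LIBRTE_SECURITY
#include <rte_security.h>
#include "ixgbe_ipsec.h"
#define IXGBE_IPSEC_ENABLED 1
#else
#define IXGBE_IPSEC_ENABLED 0
#endif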
* Re: [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec Akhil Goyal
2017-10-26 7:09 ` David Marchand
@ 2017-11-01 19:58 ` Thomas Monjalon
2017-11-01 20:10 ` Ferruh Yigit
1 sibling, 1 reply; 195+ messages in thread
From: Thomas Monjalon @ 2017-11-01 19:58 UTC (permalink / raw)
To: declan.doherty, radu.nicolau, konstantin.ananyev
Cc: dev, Akhil Goyal, pablo.de.lara.guarch, hemant.agrawal, borisp,
aviadye, sandeep.malik, jerin.jacob, john.mcnamara, shahafs,
olivier.matz
Hi, there is compilation error with GCC 4.8.5.
It may be a false positive strict aliasing check.
Please could you check it below?
25/10/2017 17:07, Akhil Goyal:
> From: Radu Nicolau <radu.nicolau@intel.com>
>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
[...]
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> @@ -123,6 +123,59 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
> }
>
> static inline void
> +desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
> +{
> + __m128i sterr0, sterr1, sterr2, sterr3;
> + __m128i tmp1, tmp2, tmp3, tmp4;
> + __m128i rearm0, rearm1, rearm2, rearm3;
> +
> + const __m128i ipsec_sterr_msk = _mm_set_epi32(
> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
> + IXGBE_RXDADV_IPSEC_ERROR_AUTH_FAILED,
> + 0, 0);
> + const __m128i ipsec_proc_msk = _mm_set_epi32(
> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
> + const __m128i ipsec_err_flag = _mm_set_epi32(
> + 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
> + 0, 0);
> + const __m128i ipsec_proc_flag = _mm_set_epi32(
> + 0, PKT_RX_SEC_OFFLOAD, 0, 0);
> +
> + rearm0 = _mm_load_si128((__m128i *)&rx_pkts[0]->rearm_data);
> + rearm1 = _mm_load_si128((__m128i *)&rx_pkts[1]->rearm_data);
> + rearm2 = _mm_load_si128((__m128i *)&rx_pkts[2]->rearm_data);
> + rearm3 = _mm_load_si128((__m128i *)&rx_pkts[3]->rearm_data);
> + sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
> + sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
> + sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
> + sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
> + tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
> + tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
> + tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
> + tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
> + sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> + _mm_and_si128(tmp2, ipsec_proc_flag));
> + sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
> + _mm_and_si128(tmp4, ipsec_proc_flag));
> + tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
> + tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
> + tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
> + tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
> + sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> + _mm_and_si128(tmp2, ipsec_proc_flag));
> + sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
> + _mm_and_si128(tmp4, ipsec_proc_flag));
> + rearm0 = _mm_or_si128(rearm0, sterr0);
> + rearm1 = _mm_or_si128(rearm1, sterr1);
> + rearm2 = _mm_or_si128(rearm2, sterr2);
> + rearm3 = _mm_or_si128(rearm3, sterr3);
> + _mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
> + _mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
> + _mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
> + _mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
> +}
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c: In function desc_to_olflags_v_ipsec:
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:140:2: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
rearm = _mm_set_epi32(((uint32_t *)rx_pkts[0]->rearm_data)[2],
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:141:10: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[1]->rearm_data)[2],
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:142:10: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[2]->rearm_data)[2],
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:143:10: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[3]->rearm_data)[2]);
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:154:2: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[0]->rearm_data)[2] = _mm_extract_epi32(rearm, 3);
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:155:2: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[1]->rearm_data)[2] = _mm_extract_epi32(rearm, 2);
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:156:2: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[2]->rearm_data)[2] = _mm_extract_epi32(rearm, 1);
^
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:157:2: error:
dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
((uint32_t *)rx_pkts[3]->rearm_data)[2] = _mm_extract_epi32(rearm, 0);
^
^ permalink raw reply [flat|nested] 195+ messages in thread
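The warning is triggered by casting &rearm_data to uint32_t * and indexing it, which GCC 4.8.5's aliasing checker flags even though the access is intentional. A common idiom that silences it with no runtime cost is to go through memcpy(), which the compiler folds into a plain load or store; the sketch below is illustrative only (helper names and offsets are made up), and the fix referenced in the next message may take a different approach.

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>

/* Sketch: read/write the third 32-bit word of mbuf->rearm_data without
 * a type-punned pointer dereference. */
static inline uint32_t
rearm_word2_get(struct rte_mbuf *m)
{
	uint32_t w;

	memcpy(&w, (uint8_t *)&m->rearm_data + 2 * sizeof(uint32_t),
	       sizeof(w));
	return w;
}

static inline void
rearm_word2_set(struct rte_mbuf *m, uint32_t w)
{
	memcpy((uint8_t *)&m->rearm_data + 2 * sizeof(uint32_t), &w,
	       sizeof(w));
}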
* Re: [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec
2017-11-01 19:58 ` Thomas Monjalon
@ 2017-11-01 20:10 ` Ferruh Yigit
0 siblings, 0 replies; 195+ messages in thread
From: Ferruh Yigit @ 2017-11-01 20:10 UTC (permalink / raw)
To: Thomas Monjalon, declan.doherty, radu.nicolau, konstantin.ananyev
Cc: dev, Akhil Goyal, pablo.de.lara.guarch, hemant.agrawal, borisp,
aviadye, sandeep.malik, jerin.jacob, john.mcnamara, shahafs,
olivier.matz
On 11/1/2017 12:58 PM, Thomas Monjalon wrote:
> Hi, there is compilation error with GCC 4.8.5.
> It may be a false positive strict aliasing check.
> Please could you check it below?
I just got a patch into next-net [1] that should fix this build error.
[1]
http://dpdk.org/browse/next/dpdk-next-net/commit/?id=8bb0ee234e49d1486d4fb63673500c615dbbea1d
>
>
> 25/10/2017 17:07, Akhil Goyal:
>> From: Radu Nicolau <radu.nicolau@intel.com>
>>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> [...]
>> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
>> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
>> @@ -123,6 +123,59 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
>> }
>>
>> static inline void
>> +desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
>> +{
>> + __m128i sterr0, sterr1, sterr2, sterr3;
>> + __m128i tmp1, tmp2, tmp3, tmp4;
>> + __m128i rearm0, rearm1, rearm2, rearm3;
>> +
>> + const __m128i ipsec_sterr_msk = _mm_set_epi32(
>> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP |
>> + IXGBE_RXDADV_IPSEC_ERROR_AUTH_FAILED,
>> + 0, 0);
>> + const __m128i ipsec_proc_msk = _mm_set_epi32(
>> + 0, IXGBE_RXDADV_IPSEC_STATUS_SECP, 0, 0);
>> + const __m128i ipsec_err_flag = _mm_set_epi32(
>> + 0, PKT_RX_SEC_OFFLOAD_FAILED | PKT_RX_SEC_OFFLOAD,
>> + 0, 0);
>> + const __m128i ipsec_proc_flag = _mm_set_epi32(
>> + 0, PKT_RX_SEC_OFFLOAD, 0, 0);
>> +
>> + rearm0 = _mm_load_si128((__m128i *)&rx_pkts[0]->rearm_data);
>> + rearm1 = _mm_load_si128((__m128i *)&rx_pkts[1]->rearm_data);
>> + rearm2 = _mm_load_si128((__m128i *)&rx_pkts[2]->rearm_data);
>> + rearm3 = _mm_load_si128((__m128i *)&rx_pkts[3]->rearm_data);
>> + sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
>> + sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
>> + sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
>> + sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
>> + tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
>> + tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
>> + tmp3 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
>> + tmp4 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
>> + sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
>> + _mm_and_si128(tmp2, ipsec_proc_flag));
>> + sterr1 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
>> + _mm_and_si128(tmp4, ipsec_proc_flag));
>> + tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
>> + tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
>> + tmp3 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
>> + tmp4 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
>> + sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
>> + _mm_and_si128(tmp2, ipsec_proc_flag));
>> + sterr3 = _mm_or_si128(_mm_and_si128(tmp3, ipsec_err_flag),
>> + _mm_and_si128(tmp4, ipsec_proc_flag));
>> + rearm0 = _mm_or_si128(rearm0, sterr0);
>> + rearm1 = _mm_or_si128(rearm1, sterr1);
>> + rearm2 = _mm_or_si128(rearm2, sterr2);
>> + rearm3 = _mm_or_si128(rearm3, sterr3);
>> + _mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
>> + _mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
>> + _mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
>> + _mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
>> +}
>
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c: In function desc_to_olflags_v_ipsec:
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:140:2: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> rearm = _mm_set_epi32(((uint32_t *)rx_pkts[0]->rearm_data)[2],
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:141:10: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[1]->rearm_data)[2],
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:142:10: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[2]->rearm_data)[2],
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:143:10: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[3]->rearm_data)[2]);
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:154:2: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[0]->rearm_data)[2] = _mm_extract_epi32(rearm, 3);
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:155:2: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[1]->rearm_data)[2] = _mm_extract_epi32(rearm, 2);
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:156:2: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[2]->rearm_data)[2] = _mm_extract_epi32(rearm, 1);
> ^
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c:157:2: error:
> dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> ((uint32_t *)rx_pkts[3]->rearm_data)[2] = _mm_extract_epi32(rearm, 0);
> ^
>
^ permalink raw reply [flat|nested] 195+ messages in thread
* [dpdk-dev] [PATCH v6 09/10] crypto/dpaa2_sec: add support for protocol offload ipsec
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (7 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 08/10] net/ixgbe: enable inline ipsec Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 10/10] examples/ipsec-secgw: add support for security offload Akhil Goyal
2017-10-26 1:16 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Thomas Monjalon
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Driver implementation to support rte_security APIs
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 422 ++++++++++++++++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 62 ++++
3 files changed, 474 insertions(+), 11 deletions(-)
diff --git a/doc/guides/cryptodevs/features/dpaa2_sec.ini b/doc/guides/cryptodevs/features/dpaa2_sec.ini
index c3bb3dd..8fd07d6 100644
--- a/doc/guides/cryptodevs/features/dpaa2_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa2_sec.ini
@@ -7,6 +7,7 @@
Symmetric crypto = Y
Sym operation chaining = Y
HW Accelerated = Y
+Protocol offload = Y
;
; Supported crypto algorithms of the 'dpaa2_sec' crypto driver.
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index c67548e..2cdc8c1 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -36,6 +36,7 @@
#include <rte_mbuf.h>
#include <rte_cryptodev.h>
+#include <rte_security_driver.h>
#include <rte_malloc.h>
#include <rte_memcpy.h>
#include <rte_string_fns.h>
@@ -73,12 +74,44 @@
#define FLE_POOL_NUM_BUFS 32000
#define FLE_POOL_BUF_SIZE 256
#define FLE_POOL_CACHE_SIZE 512
+#define SEC_FLC_DHR_OUTBOUND -114
+#define SEC_FLC_DHR_INBOUND 0
enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
static uint8_t cryptodev_driver_id;
static inline int
+build_proto_fd(dpaa2_sec_session *sess,
+ struct rte_crypto_op *op,
+ struct qbman_fd *fd, uint16_t bpid)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct ctxt_priv *priv = sess->ctxt;
+ struct sec_flow_context *flc;
+ struct rte_mbuf *mbuf = sym_op->m_src;
+
+ if (likely(bpid < MAX_BPID))
+ DPAA2_SET_FD_BPID(fd, bpid);
+ else
+ DPAA2_SET_FD_IVP(fd);
+
+ /* Save the shared descriptor */
+ flc = &priv->flc_desc[0].flc;
+
+ DPAA2_SET_FD_ADDR(fd, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+ DPAA2_SET_FD_OFFSET(fd, sym_op->m_src->data_off);
+ DPAA2_SET_FD_LEN(fd, sym_op->m_src->pkt_len);
+ DPAA2_SET_FD_FLC(fd, ((uint64_t)flc));
+
+ /* save physical address of mbuf */
+ op->sym->aead.digest.phys_addr = mbuf->buf_physaddr;
+ mbuf->buf_physaddr = (uint64_t)op;
+
+ return 0;
+}
+
+static inline int
build_authenc_gcm_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
@@ -560,10 +593,11 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
}
static inline int
-build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
+build_sec_fd(struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
{
int ret = -1;
+ dpaa2_sec_session *sess;
PMD_INIT_FUNC_TRACE();
/*
@@ -573,6 +607,16 @@ build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
return -ENOTSUP;
}
+
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+ sess = (dpaa2_sec_session *)get_session_private_data(
+ op->sym->session, cryptodev_driver_id);
+ else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+ sess = (dpaa2_sec_session *)get_sec_session_private_data(
+ op->sym->sec_session);
+ else
+ return -1;
+
switch (sess->ctxt_type) {
case DPAA2_SEC_CIPHER:
ret = build_cipher_fd(sess, op, fd, bpid);
@@ -586,6 +630,9 @@ build_sec_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
case DPAA2_SEC_CIPHER_HASH:
ret = build_authenc_fd(sess, op, fd, bpid);
break;
+ case DPAA2_SEC_IPSEC:
+ ret = build_proto_fd(sess, op, fd, bpid);
+ break;
case DPAA2_SEC_HASH_CIPHER:
default:
RTE_LOG(ERR, PMD, "error: Unsupported session\n");
@@ -609,12 +656,11 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
/*todo - need to support multiple buffer pools */
uint16_t bpid;
struct rte_mempool *mb_pool;
- dpaa2_sec_session *sess;
if (unlikely(nb_ops == 0))
return 0;
- if (ops[0]->sess_type != RTE_CRYPTO_OP_WITH_SESSION) {
+ if (ops[0]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
return 0;
}
@@ -639,13 +685,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
for (loop = 0; loop < frames_to_send; loop++) {
/*Clear the unused FD fields before sending*/
memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
- sess = (dpaa2_sec_session *)
- get_session_private_data(
- (*ops)->sym->session,
- cryptodev_driver_id);
mb_pool = (*ops)->sym->m_src->pool;
bpid = mempool_to_bpid(mb_pool);
- ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
+ ret = build_sec_fd(*ops, &fd_arr[loop], bpid);
if (ret) {
PMD_DRV_LOG(ERR, "error: Improper packet"
" contents for crypto operation\n");
@@ -670,13 +712,45 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
}
static inline struct rte_crypto_op *
-sec_fd_to_mbuf(const struct qbman_fd *fd)
+sec_simple_fd_to_mbuf(const struct qbman_fd *fd, __rte_unused uint8_t id)
+{
+ struct rte_crypto_op *op;
+ uint16_t len = DPAA2_GET_FD_LEN(fd);
+ uint16_t diff = 0;
+ dpaa2_sec_session *sess_priv;
+
+ struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
+ DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+ rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
+
+ op = (struct rte_crypto_op *)mbuf->buf_physaddr;
+ mbuf->buf_physaddr = op->sym->aead.digest.phys_addr;
+ op->sym->aead.digest.phys_addr = 0L;
+
+ sess_priv = (dpaa2_sec_session *)get_sec_session_private_data(
+ op->sym->sec_session);
+ if (sess_priv->dir == DIR_ENC)
+ mbuf->data_off += SEC_FLC_DHR_OUTBOUND;
+ else
+ mbuf->data_off += SEC_FLC_DHR_INBOUND;
+ diff = len - mbuf->pkt_len;
+ mbuf->pkt_len += diff;
+ mbuf->data_len += diff;
+
+ return op;
+}
+
+static inline struct rte_crypto_op *
+sec_fd_to_mbuf(const struct qbman_fd *fd, uint8_t driver_id)
{
struct qbman_fle *fle;
struct rte_crypto_op *op;
struct ctxt_priv *priv;
struct rte_mbuf *dst, *src;
+ if (DPAA2_FD_GET_FORMAT(fd) == qbman_fd_single)
+ return sec_simple_fd_to_mbuf(fd, driver_id);
+
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
PMD_RX_LOG(DEBUG, "FLE addr = %x - %x, offset = %x",
@@ -730,6 +804,8 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
{
/* Function is responsible to receive frames for a given device and VQ*/
struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp;
+ struct rte_cryptodev *dev =
+ (struct rte_cryptodev *)(dpaa2_qp->rx_vq.dev);
struct qbman_result *dq_storage;
uint32_t fqid = dpaa2_qp->rx_vq.fqid;
int ret, num_rx = 0;
@@ -799,7 +875,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
}
fd = qbman_result_DQ_fd(dq_storage);
- ops[num_rx] = sec_fd_to_mbuf(fd);
+ ops[num_rx] = sec_fd_to_mbuf(fd, dev->driver_id);
if (unlikely(fd->simple.frc)) {
/* TODO Parse SEC errors */
@@ -1576,6 +1652,300 @@ dpaa2_sec_set_session_parameters(struct rte_cryptodev *dev,
}
static int
+dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
+ struct rte_security_session_conf *conf,
+ void *sess)
+{
+ struct rte_security_ipsec_xform *ipsec_xform = &conf->ipsec;
+ struct rte_crypto_auth_xform *auth_xform;
+ struct rte_crypto_cipher_xform *cipher_xform;
+ dpaa2_sec_session *session = (dpaa2_sec_session *)sess;
+ struct ctxt_priv *priv;
+ struct ipsec_encap_pdb encap_pdb;
+ struct ipsec_decap_pdb decap_pdb;
+ struct alginfo authdata, cipherdata;
+ unsigned int bufsize;
+ struct sec_flow_context *flc;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ cipher_xform = &conf->crypto_xform->cipher;
+ auth_xform = &conf->crypto_xform->next->auth;
+ } else {
+ auth_xform = &conf->crypto_xform->auth;
+ cipher_xform = &conf->crypto_xform->next->cipher;
+ }
+ priv = (struct ctxt_priv *)rte_zmalloc(NULL,
+ sizeof(struct ctxt_priv) +
+ sizeof(struct sec_flc_desc),
+ RTE_CACHE_LINE_SIZE);
+
+ if (priv == NULL) {
+ RTE_LOG(ERR, PMD, "\nNo memory for priv CTXT");
+ return -ENOMEM;
+ }
+
+ flc = &priv->flc_desc[0].flc;
+
+ session->ctxt_type = DPAA2_SEC_IPSEC;
+ session->cipher_key.data = rte_zmalloc(NULL,
+ cipher_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->cipher_key.data == NULL &&
+ cipher_xform->key.length > 0) {
+ RTE_LOG(ERR, PMD, "No Memory for cipher key\n");
+ rte_free(priv);
+ return -ENOMEM;
+ }
+
+ session->cipher_key.length = cipher_xform->key.length;
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->auth_key.data == NULL &&
+ auth_xform->key.length > 0) {
+ RTE_LOG(ERR, PMD, "No Memory for auth key\n");
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -ENOMEM;
+ }
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->cipher_key.data, cipher_xform->key.data,
+ cipher_xform->key.length);
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+
+ authdata.key = (uint64_t)session->auth_key.data;
+ authdata.keylen = session->auth_key.length;
+ authdata.key_enc_flags = 0;
+ authdata.key_type = RTA_DATA_IMM;
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA1_96;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_MD5_96;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_256_128;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_384_192;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_SHA2_512_256;
+ authdata.algmode = OP_ALG_AAI_HMAC;
+ session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ authdata.algtype = OP_PCL_IPSEC_AES_CMAC_96;
+ session->auth_alg = RTE_CRYPTO_AUTH_AES_CMAC;
+ break;
+ case RTE_CRYPTO_AUTH_NULL:
+ authdata.algtype = OP_PCL_IPSEC_HMAC_NULL;
+ session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ case RTE_CRYPTO_AUTH_SHA1:
+ case RTE_CRYPTO_AUTH_SHA256:
+ case RTE_CRYPTO_AUTH_SHA512:
+ case RTE_CRYPTO_AUTH_SHA224:
+ case RTE_CRYPTO_AUTH_SHA384:
+ case RTE_CRYPTO_AUTH_MD5:
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ RTE_LOG(ERR, PMD, "Crypto: Unsupported auth alg %u\n",
+ auth_xform->algo);
+ goto out;
+ default:
+ RTE_LOG(ERR, PMD, "Crypto: Undefined Auth specified %u\n",
+ auth_xform->algo);
+ goto out;
+ }
+ cipherdata.key = (uint64_t)session->cipher_key.data;
+ cipherdata.keylen = session->cipher_key.length;
+ cipherdata.key_enc_flags = 0;
+ cipherdata.key_type = RTA_DATA_IMM;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ cipherdata.algtype = OP_PCL_IPSEC_AES_CBC;
+ cipherdata.algmode = OP_ALG_AAI_CBC;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ cipherdata.algtype = OP_PCL_IPSEC_3DES;
+ cipherdata.algmode = OP_ALG_AAI_CBC;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ cipherdata.algtype = OP_PCL_IPSEC_AES_CTR;
+ cipherdata.algmode = OP_ALG_AAI_CTR;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_NULL:
+ cipherdata.algtype = OP_PCL_IPSEC_NULL;
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
+ RTE_LOG(ERR, PMD, "Crypto: Unsupported Cipher alg %u\n",
+ cipher_xform->algo);
+ goto out;
+ default:
+ RTE_LOG(ERR, PMD, "Crypto: Undefined Cipher specified %u\n",
+ cipher_xform->algo);
+ goto out;
+ }
+
+ if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ struct ip ip4_hdr;
+
+ flc->dhr = SEC_FLC_DHR_OUTBOUND;
+ ip4_hdr.ip_v = IPVERSION;
+ ip4_hdr.ip_hl = 5;
+ ip4_hdr.ip_len = rte_cpu_to_be_16(sizeof(ip4_hdr));
+ ip4_hdr.ip_tos = ipsec_xform->tunnel.ipv4.dscp;
+ ip4_hdr.ip_id = 0;
+ ip4_hdr.ip_off = 0;
+ ip4_hdr.ip_ttl = ipsec_xform->tunnel.ipv4.ttl;
+ ip4_hdr.ip_p = 0x32;
+ ip4_hdr.ip_sum = 0;
+ ip4_hdr.ip_src = ipsec_xform->tunnel.ipv4.src_ip;
+ ip4_hdr.ip_dst = ipsec_xform->tunnel.ipv4.dst_ip;
+ ip4_hdr.ip_sum = calc_chksum((uint16_t *)(void *)&ip4_hdr,
+ sizeof(struct ip));
+
+ /* For Sec Proto only one descriptor is required. */
+ memset(&encap_pdb, 0, sizeof(struct ipsec_encap_pdb));
+ encap_pdb.options = (IPVERSION << PDBNH_ESP_ENCAP_SHIFT) |
+ PDBOPTS_ESP_OIHI_PDB_INL |
+ PDBOPTS_ESP_IVSRC |
+ PDBHMO_ESP_ENCAP_DTTL;
+ encap_pdb.spi = ipsec_xform->spi;
+ encap_pdb.ip_hdr_len = sizeof(struct ip);
+
+ session->dir = DIR_ENC;
+ bufsize = cnstr_shdsc_ipsec_new_encap(priv->flc_desc[0].desc,
+ 1, 0, &encap_pdb,
+ (uint8_t *)&ip4_hdr,
+ &cipherdata, &authdata);
+ } else if (ipsec_xform->direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ flc->dhr = SEC_FLC_DHR_INBOUND;
+ memset(&decap_pdb, 0, sizeof(struct ipsec_decap_pdb));
+ decap_pdb.options = sizeof(struct ip) << 16;
+ session->dir = DIR_DEC;
+ bufsize = cnstr_shdsc_ipsec_new_decap(priv->flc_desc[0].desc,
+ 1, 0, &decap_pdb, &cipherdata, &authdata);
+ } else
+ goto out;
+ flc->word1_sdl = (uint8_t)bufsize;
+
+ /* Enable the stashing control bit */
+ DPAA2_SET_FLC_RSC(flc);
+ flc->word2_rflc_31_0 = lower_32_bits(
+ (uint64_t)&(((struct dpaa2_sec_qp *)
+ dev->data->queue_pairs[0])->rx_vq) | 0x14);
+ flc->word3_rflc_63_32 = upper_32_bits(
+ (uint64_t)&(((struct dpaa2_sec_qp *)
+ dev->data->queue_pairs[0])->rx_vq));
+
+ /* Set EWS bit i.e. enable write-safe */
+ DPAA2_SET_FLC_EWS(flc);
+ /* Set BS = 1 i.e reuse input buffers as output buffers */
+ DPAA2_SET_FLC_REUSE_BS(flc);
+ /* Set FF = 10; reuse input buffers if they provide sufficient space */
+ DPAA2_SET_FLC_REUSE_FF(flc);
+
+ session->ctxt = priv;
+
+ return 0;
+out:
+ rte_free(session->auth_key.data);
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -1;
+}
+
+static int
+dpaa2_sec_security_session_create(void *dev,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
+ int ret;
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ switch (conf->protocol) {
+ case RTE_SECURITY_PROTOCOL_IPSEC:
+ ret = dpaa2_sec_set_ipsec_session(cdev, conf,
+ sess_private_data);
+ break;
+ case RTE_SECURITY_PROTOCOL_MACSEC:
+ return -ENOTSUP;
+ default:
+ return -EINVAL;
+ }
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "DPAA2 PMD: failed to configure session parameters");
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sec_session_private_data(sess, sess_private_data);
+
+ return ret;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static int
+dpaa2_sec_security_session_destroy(void *dev __rte_unused,
+ struct rte_security_session *sess)
+{
+ PMD_INIT_FUNC_TRACE();
+ void *sess_priv = get_sec_session_private_data(sess);
+
+ dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+
+ rte_free(s->ctxt);
+ rte_free(s->cipher_key.data);
+ rte_free(s->auth_key.data);
+ memset(sess, 0, sizeof(dpaa2_sec_session));
+ set_sec_session_private_data(sess, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+ return 0;
+}
+
+static int
dpaa2_sec_session_configure(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *sess,
@@ -1849,11 +2219,28 @@ static struct rte_cryptodev_ops crypto_ops = {
.session_clear = dpaa2_sec_session_clear,
};
+static const struct rte_security_capability *
+dpaa2_sec_capabilities_get(void *device __rte_unused)
+{
+ return dpaa2_sec_security_cap;
+}
+
+struct rte_security_ops dpaa2_sec_security_ops = {
+ .session_create = dpaa2_sec_security_session_create,
+ .session_update = NULL,
+ .session_stats_get = NULL,
+ .session_destroy = dpaa2_sec_security_session_destroy,
+ .set_pkt_metadata = NULL,
+ .capabilities_get = dpaa2_sec_capabilities_get
+};
+
static int
dpaa2_sec_uninit(const struct rte_cryptodev *dev)
{
struct dpaa2_sec_dev_private *internals = dev->data->dev_private;
+ rte_free(dev->security_ctx);
+
rte_mempool_free(internals->fle_pool);
PMD_INIT_LOG(INFO, "Closing DPAA2_SEC device %s on numa socket %u\n",
@@ -1868,6 +2255,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
struct dpaa2_sec_dev_private *internals;
struct rte_device *dev = cryptodev->device;
struct rte_dpaa2_device *dpaa2_dev;
+ struct rte_security_ctx *security_instance;
struct fsl_mc_io *dpseci;
uint16_t token;
struct dpseci_attr attr;
@@ -1889,7 +2277,8 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_SECURITY;
internals = cryptodev->data->dev_private;
internals->max_nb_sessions = RTE_DPAA2_SEC_PMD_MAX_NB_SESSIONS;
@@ -1903,6 +2292,17 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
PMD_INIT_LOG(DEBUG, "Device already init by primary process");
return 0;
}
+
+ /* Initialize security_ctx only for primary process*/
+ security_instance = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (security_instance == NULL)
+ return -ENOMEM;
+ security_instance->device = (void *)cryptodev;
+ security_instance->ops = &dpaa2_sec_security_ops;
+ security_instance->sess_cnt = 0;
+ cryptodev->security_ctx = security_instance;
+
/*Open the rte device via MC and save the handle for further use*/
dpseci = (struct fsl_mc_io *)rte_calloc(NULL, 1,
sizeof(struct fsl_mc_io), 0);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 3849a05..14e71df 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -67,6 +67,11 @@ enum shr_desc_type {
#define DIR_ENC 1
#define DIR_DEC 0
+#define DPAA2_SET_FLC_EWS(flc) (flc->word1_bits23_16 |= 0x1)
+#define DPAA2_SET_FLC_RSC(flc) (flc->word1_bits31_24 |= 0x1)
+#define DPAA2_SET_FLC_REUSE_BS(flc) (flc->mode_bits |= 0x8000)
+#define DPAA2_SET_FLC_REUSE_FF(flc) (flc->mode_bits |= 0x2000)
+
/* SEC Flow Context Descriptor */
struct sec_flow_context {
/* word 0 */
@@ -411,4 +416,61 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
+
+static const struct rte_security_capability dpaa2_sec_security_cap[] = {
+ { /* IPsec Lookaside Protocol offload ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = dpaa2_sec_capabilities
+ },
+ { /* IPsec Lookaside Protocol offload ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ },
+ .crypto_capabilities = dpaa2_sec_capabilities
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+};
+
+/**
+ * Checksum
+ *
+ * @param buffer buffer over which the checksum is calculated
+ * @param len buffer length
+ *
+ * @return checksum value in host cpu order
+ */
+static inline uint16_t
+calc_chksum(void *buffer, int len)
+{
+ uint16_t *buf = (uint16_t *)buffer;
+ uint32_t sum = 0;
+ uint16_t result;
+
+ for (sum = 0; len > 1; len -= 2)
+ sum += *buf++;
+
+ if (len == 1)
+ sum += *(unsigned char *)buf;
+
+ sum = (sum >> 16) + (sum & 0xFFFF);
+ sum += (sum >> 16);
+ result = ~sum;
+
+ return result;
+}
+
#endif /* _RTE_DPAA2_SEC_PMD_PRIVATE_H_ */
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
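To show where the new session_create/session_destroy ops above plug in, here is a hedged sketch of the application side: fetching the device's security context and creating a lookaside-protocol IPsec session. The field values are placeholders only; a real tunnel-mode SA also fills in the tunnel endpoint addresses and supplies a cipher+auth xform chain that matches the SA direction expected by dpaa2_sec_set_ipsec_session(). The ipsec-secgw changes in the next patch do the equivalent work from the SA configuration file.

#include <rte_cryptodev.h>
#include <rte_security.h>

/* Sketch only: create a lookaside-protocol ESP tunnel session on a
 * dpaa2_sec device; xform and sess_mp are assumed to be set up by the
 * caller, and the SPI/tunnel parameters here are placeholders. */
static struct rte_security_session *
create_lookaside_ipsec_session(uint8_t cdev_id,
			       struct rte_crypto_sym_xform *xform,
			       struct rte_mempool *sess_mp)
{
	struct rte_security_ctx *ctx = rte_cryptodev_get_sec_ctx(cdev_id);
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.spi = 5,	/* placeholder */
			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
			/* tunnel endpoint addresses omitted for brevity */
		},
		.crypto_xform = xform,
	};

	if (ctx == NULL)
		return NULL;
	return rte_security_session_create(ctx, &conf, sess_mp);
}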
* [dpdk-dev] [PATCH v6 10/10] examples/ipsec-secgw: add support for security offload
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (8 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 09/10] crypto/dpaa2_sec: add support for protocol offload ipsec Akhil Goyal
@ 2017-10-25 15:07 ` Akhil Goyal
2017-10-26 1:16 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Thomas Monjalon
10 siblings, 0 replies; 195+ messages in thread
From: Akhil Goyal @ 2017-10-25 15:07 UTC (permalink / raw)
To: dev
Cc: declan.doherty, pablo.de.lara.guarch, hemant.agrawal,
radu.nicolau, borisp, aviadye, thomas, sandeep.malik,
jerin.jacob, john.mcnamara, konstantin.ananyev, shahafs,
olivier.matz
Ipsec-secgw application is modified so that it can support the
following types of actions for crypto operations:
1. full protocol offload using crypto devices.
2. inline ipsec using ethernet devices to perform crypto operations
3. full protocol offload using ethernet devices.
4. non protocol offload
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Aviad Yehezkel <aviadye@mellanox.com>
---
doc/guides/sample_app_ug/ipsec_secgw.rst | 52 +++++-
examples/ipsec-secgw/esp.c | 120 ++++++++----
examples/ipsec-secgw/esp.h | 10 -
examples/ipsec-secgw/ipsec-secgw.c | 5 +
examples/ipsec-secgw/ipsec.c | 308 ++++++++++++++++++++++++++-----
examples/ipsec-secgw/ipsec.h | 32 +++-
examples/ipsec-secgw/sa.c | 151 +++++++++++----
7 files changed, 545 insertions(+), 133 deletions(-)
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index a292859..358e763 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -52,13 +52,22 @@ The application classifies the ports as *Protected* and *Unprotected*.
Thus, traffic received on an Unprotected or Protected port is consider
Inbound or Outbound respectively.
+The application also supports complete IPsec protocol offload to hardware
+(a lookaside crypto accelerator or an ethernet device). It also supports
+inline IPsec processing by the supported ethernet device during transmission.
+These modes can be selected during the SA creation configuration.
+
+In case of complete protocol offload, the processing of headers (ESP and outer
+IP header) is done by the hardware and the application does not need to
+add/remove them during outbound/inbound processing.
+
The Path for IPsec Inbound traffic is:
* Read packets from the port.
* Classify packets between IPv4 and ESP.
* Perform Inbound SA lookup for ESP packets based on their SPI.
-* Perform Verification/Decryption.
-* Remove ESP and outer IP header
+* Perform Verification/Decryption (Not needed in case of inline ipsec).
+* Remove ESP and outer IP header (Not needed in case of protocol offload).
* Inbound SP check using ACL of decrypted packets and any other IPv4 packets.
* Routing.
* Write packet to port.
@@ -68,8 +77,8 @@ The Path for the IPsec Outbound traffic is:
* Read packets from the port.
* Perform Outbound SP check using ACL of all IPv4 traffic.
* Perform Outbound SA lookup for packets that need IPsec protection.
-* Add ESP and outer IP header.
-* Perform Encryption/Digest.
+* Add ESP and outer IP header (Not needed in case protocol offload).
+* Perform Encryption/Digest (Not needed in case of inline ipsec).
* Routing.
* Write packet to port.
@@ -389,7 +398,7 @@ The SA rule syntax is shown as follows:
.. code-block:: console
sa <dir> <spi> <cipher_algo> <cipher_key> <auth_algo> <auth_key>
- <mode> <src_ip> <dst_ip>
+ <mode> <src_ip> <dst_ip> <action_type> <port_id>
where each options means:
@@ -530,6 +539,34 @@ where each options means:
* *dst X.X.X.X* for IPv4
* *dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX* for IPv6
+``<type>``
+
+ * Action type to specify the security action. This option specifies
+ whether the SA is processed with lookaside protocol offload to a HW
+ accelerator, with protocol offload on an ethernet device, or with inline
+ crypto processing on the ethernet device during transmission.
+
+ * Optional: Yes, default type *no-offload*
+
+ * Available options:
+
+ * *lookaside-protocol-offload*: look aside protocol offload to HW accelerator
+ * *inline-protocol-offload*: inline protocol offload on ethernet device
+ * *inline-crypto-offload*: inline crypto processing on ethernet device
+ * *no-offload*: no offloading to hardware
+
+ ``<port_id>``
+
+ * Port/device ID of the ethernet/crypto accelerator for which the SA is
+ configured. This option is used when *type* is NOT *no-offload*
+
+ * Optional: No, if *type* is not *no-offload*
+
+ * Syntax:
+
+ * *port_id X* X is a valid device number in decimal
+
+
Example SA rules:
.. code-block:: console
@@ -549,6 +586,11 @@ Example SA rules:
aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode ipv4-tunnel src 172.16.2.5 dst 172.16.1.5
+ sa out 5 cipher_algo aes-128-cbc cipher_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
+ auth_algo sha1-hmac auth_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0 \
+ mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
+ type lookaside-protocol-offload port_id 4
+
Routing rule syntax
^^^^^^^^^^^^^^^^^^^
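
As documented above, the new ``type`` keyword selects one of the rte_security
action types introduced earlier in this series. A minimal sketch of that
mapping, shown here only for orientation and not as part of the patch (the
real token parsing is in examples/ipsec-secgw/sa.c further below;
sa_action_type() is a hypothetical helper name):

#include <string.h>
#include <rte_security.h>

/* Sketch: translate the SA rule "type" keyword into an action type.
 * "no-offload" (or an absent keyword) selects the existing lookaside
 * crypto path, i.e. RTE_SECURITY_ACTION_TYPE_NONE. */
static enum rte_security_session_action_type
sa_action_type(const char *tok)
{
        if (strcmp(tok, "lookaside-protocol-offload") == 0)
                return RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
        if (strcmp(tok, "inline-protocol-offload") == 0)
                return RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
        if (strcmp(tok, "inline-crypto-offload") == 0)
                return RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO;
        return RTE_SECURITY_ACTION_TYPE_NONE;
}
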
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index a63fb95..f7afe13 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -58,8 +58,11 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_sym_op *sym_cop;
int32_t payload_len, ip_hdr_len;
- RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+ return 0;
+
+ RTE_ASSERT(m != NULL);
RTE_ASSERT(cop != NULL);
ip4 = rte_pktmbuf_mtod(m, struct ip *);
@@ -175,29 +178,44 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
RTE_ASSERT(sa != NULL);
RTE_ASSERT(cop != NULL);
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
+ if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ else
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ } else
+ cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ }
+
if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
RTE_LOG(ERR, IPSEC_ESP, "failed crypto op\n");
return -1;
}
- nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
- rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
- pad_len = nexthdr - 1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
+ sa->ol_flags & RTE_SECURITY_RX_HW_TRAILER_OFFLOAD) {
+ nexthdr = &m->inner_esp_next_proto;
+ } else {
+ nexthdr = rte_pktmbuf_mtod_offset(m, uint8_t*,
+ rte_pktmbuf_pkt_len(m) - sa->digest_len - 1);
+ pad_len = nexthdr - 1;
+
+ padding = pad_len - *pad_len;
+ for (i = 0; i < *pad_len; i++) {
+ if (padding[i] != i + 1) {
+ RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
+ return -EINVAL;
+ }
+ }
- padding = pad_len - *pad_len;
- for (i = 0; i < *pad_len; i++) {
- if (padding[i] != i + 1) {
- RTE_LOG(ERR, IPSEC_ESP, "invalid padding\n");
+ if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
+ RTE_LOG(ERR, IPSEC_ESP,
+ "failed to remove pad_len + digest\n");
return -EINVAL;
}
}
- if (rte_pktmbuf_trim(m, *pad_len + 2 + sa->digest_len)) {
- RTE_LOG(ERR, IPSEC_ESP,
- "failed to remove pad_len + digest\n");
- return -EINVAL;
- }
-
if (unlikely(sa->flags == TRANSPORT)) {
ip = rte_pktmbuf_mtod(m, struct ip *);
ip4 = (struct ip *)rte_pktmbuf_adj(m,
@@ -227,14 +245,13 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct ip *ip4;
struct ip6_hdr *ip6;
struct esp_hdr *esp = NULL;
- uint8_t *padding, *new_ip, nlp;
+ uint8_t *padding = NULL, *new_ip, nlp;
struct rte_crypto_sym_op *sym_cop;
int32_t i;
uint16_t pad_payload_len, pad_len, ip_hdr_len;
RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
- RTE_ASSERT(cop != NULL);
ip_hdr_len = 0;
@@ -284,12 +301,19 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
return -EINVAL;
}
- padding = (uint8_t *)rte_pktmbuf_append(m, pad_len + sa->digest_len);
- if (unlikely(padding == NULL)) {
- RTE_LOG(ERR, IPSEC_ESP, "not enough mbuf trailing space\n");
- return -ENOSPC;
+ /* Add trailer padding if it is not constructed by HW */
+ if (sa->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+ (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO &&
+ !(sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD))) {
+ padding = (uint8_t *)rte_pktmbuf_append(m, pad_len +
+ sa->digest_len);
+ if (unlikely(padding == NULL)) {
+ RTE_LOG(ERR, IPSEC_ESP,
+ "not enough mbuf trailing space\n");
+ return -ENOSPC;
+ }
+ rte_prefetch0(padding);
}
- rte_prefetch0(padding);
switch (sa->flags) {
case IP4_TUNNEL:
@@ -323,15 +347,46 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
esp->spi = rte_cpu_to_be_32(sa->spi);
esp->seq = rte_cpu_to_be_32((uint32_t)sa->seq);
+ /* set iv */
uint64_t *iv = (uint64_t *)(esp + 1);
+ if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ *iv = rte_cpu_to_be_64(sa->seq);
+ } else {
+ switch (sa->cipher_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ memset(iv, 0, sa->iv_len);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ *iv = rte_cpu_to_be_64(sa->seq);
+ break;
+ default:
+ RTE_LOG(ERR, IPSEC_ESP,
+ "unsupported cipher algorithm %u\n",
+ sa->cipher_algo);
+ return -EINVAL;
+ }
+ }
+
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (sa->ol_flags & RTE_SECURITY_TX_HW_TRAILER_OFFLOAD) {
+ /* Set the inner esp next protocol for HW trailer */
+ m->inner_esp_next_proto = nlp;
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ } else {
+ padding[pad_len - 2] = pad_len - 2;
+ padding[pad_len - 1] = nlp;
+ }
+ goto done;
+ }
+ RTE_ASSERT(cop != NULL);
sym_cop = get_sym_cop(cop);
sym_cop->m_src = m;
if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
uint8_t *aad;
- *iv = rte_cpu_to_be_64(sa->seq);
sym_cop->aead.data.offset = ip_hdr_len +
sizeof(struct esp_hdr) + sa->iv_len;
sym_cop->aead.data.length = pad_payload_len;
@@ -361,13 +416,11 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
switch (sa->cipher_algo) {
case RTE_CRYPTO_CIPHER_NULL:
case RTE_CRYPTO_CIPHER_AES_CBC:
- memset(iv, 0, sa->iv_len);
sym_cop->cipher.data.offset = ip_hdr_len +
sizeof(struct esp_hdr);
sym_cop->cipher.data.length = pad_payload_len + sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
- *iv = rte_cpu_to_be_64(sa->seq);
sym_cop->cipher.data.offset = ip_hdr_len +
sizeof(struct esp_hdr) + sa->iv_len;
sym_cop->cipher.data.length = pad_payload_len;
@@ -409,21 +462,26 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
}
+done:
return 0;
}
int
-esp_outbound_post(struct rte_mbuf *m __rte_unused,
- struct ipsec_sa *sa __rte_unused,
- struct rte_crypto_op *cop)
+esp_outbound_post(struct rte_mbuf *m,
+ struct ipsec_sa *sa,
+ struct rte_crypto_op *cop)
{
RTE_ASSERT(m != NULL);
RTE_ASSERT(sa != NULL);
- RTE_ASSERT(cop != NULL);
- if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
- RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
- return -1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ } else {
+ RTE_ASSERT(cop != NULL);
+ if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
+ RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
+ return -1;
+ }
}
return 0;
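
The inbound handling for inline crypto in esp_inbound_post() above boils down
to reading the verification result from the mbuf RX offload flags instead of
from a crypto operation. A standalone sketch of that mapping, using only the
mbuf flags added earlier in this series (inline_crypto_rx_status() is a
hypothetical helper name, not part of the patch):

#include <rte_crypto.h>
#include <rte_mbuf.h>

/* Sketch: derive a crypto-op style status for an inline-crypto SA from
 * the RX security offload flags set by the ethernet device. */
static enum rte_crypto_op_status
inline_crypto_rx_status(const struct rte_mbuf *m)
{
        if (!(m->ol_flags & PKT_RX_SEC_OFFLOAD))
                return RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
        if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
                return RTE_CRYPTO_OP_STATUS_ERROR;
        return RTE_CRYPTO_OP_STATUS_SUCCESS;
}
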
diff --git a/examples/ipsec-secgw/esp.h b/examples/ipsec-secgw/esp.h
index fa5cc8a..23601e3 100644
--- a/examples/ipsec-secgw/esp.h
+++ b/examples/ipsec-secgw/esp.h
@@ -35,16 +35,6 @@
struct mbuf;
-/* RFC4303 */
-struct esp_hdr {
- uint32_t spi;
- uint32_t seq;
- /* Payload */
- /* Padding */
- /* Pad Length */
- /* Next Header */
- /* Integrity Check Value - ICV */
-};
int
esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 6abf852..6201d85 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1390,6 +1390,11 @@ port_init(uint16_t portid)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SECURITY)
+ port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SECURITY;
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SECURITY)
+ port_conf.txmode.offloads |= DEV_TX_OFFLOAD_SECURITY;
+
ret = rte_eth_dev_configure(portid, nb_rx_queue, nb_tx_queue,
&port_conf);
if (ret < 0)
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 36fb8c8..c24284d 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -37,7 +37,9 @@
#include <rte_branch_prediction.h>
#include <rte_log.h>
#include <rte_crypto.h>
+#include <rte_security.h>
#include <rte_cryptodev.h>
+#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_hash.h>
@@ -49,7 +51,7 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
{
struct rte_cryptodev_info cdev_info;
unsigned long cdev_id_qp = 0;
- int32_t ret;
+ int32_t ret = 0;
struct cdev_key key = { 0 };
key.lcore_id = (uint8_t)rte_lcore_id();
@@ -58,16 +60,19 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
key.auth_algo = (uint8_t)sa->auth_algo;
key.aead_algo = (uint8_t)sa->aead_algo;
- ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
- (void **)&cdev_id_qp);
- if (ret < 0) {
- RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
- "auth_algo %u, aead_algo %u\n",
- key.lcore_id,
- key.cipher_algo,
- key.auth_algo,
- key.aead_algo);
- return -1;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+ ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
+ (void **)&cdev_id_qp);
+ if (ret < 0) {
+ RTE_LOG(ERR, IPSEC,
+ "No cryptodev: core %u, cipher_algo %u, "
+ "auth_algo %u, aead_algo %u\n",
+ key.lcore_id,
+ key.cipher_algo,
+ key.auth_algo,
+ key.aead_algo);
+ return -1;
+ }
}
RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
@@ -75,23 +80,153 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
ipsec_ctx->tbl[cdev_id_qp].id,
ipsec_ctx->tbl[cdev_id_qp].qp);
- sa->crypto_session = rte_cryptodev_sym_session_create(
- ipsec_ctx->session_pool);
- rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
- sa->crypto_session, sa->xforms,
- ipsec_ctx->session_pool);
-
- rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
- if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
- ret = rte_cryptodev_queue_pair_attach_sym_session(
- ipsec_ctx->tbl[cdev_id_qp].id,
- ipsec_ctx->tbl[cdev_id_qp].qp,
- sa->crypto_session);
- if (ret < 0) {
- RTE_LOG(ERR, IPSEC,
- "Session cannot be attached to qp %u ",
- ipsec_ctx->tbl[cdev_id_qp].qp);
- return -1;
+ if (sa->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+ struct rte_security_session_conf sess_conf = {
+ .action_type = sa->type,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = {
+ .spi = sa->spi,
+ .salt = sa->salt,
+ .options = { 0 },
+ .direction = sa->direction,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = (sa->flags == IP4_TUNNEL ||
+ sa->flags == IP6_TUNNEL) ?
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL :
+ RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ },
+ .crypto_xform = sa->xforms
+
+ };
+
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) {
+ struct rte_security_ctx *ctx = (struct rte_security_ctx *)
+ rte_cryptodev_get_sec_ctx(
+ ipsec_ctx->tbl[cdev_id_qp].id);
+
+ if (sess_conf.ipsec.mode ==
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ struct rte_security_ipsec_tunnel_param *tunnel =
+ &sess_conf.ipsec.tunnel;
+ if (sa->flags == IP4_TUNNEL) {
+ tunnel->type =
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+ tunnel->ipv4.ttl = IPDEFTTL;
+
+ memcpy((uint8_t *)&tunnel->ipv4.src_ip,
+ (uint8_t *)&sa->src.ip.ip4, 4);
+
+ memcpy((uint8_t *)&tunnel->ipv4.dst_ip,
+ (uint8_t *)&sa->dst.ip.ip4, 4);
+ }
+ /* TODO support for Transport and IPV6 tunnel */
+ }
+
+ sa->sec_session = rte_security_session_create(ctx,
+ &sess_conf, ipsec_ctx->session_pool);
+ if (sa->sec_session == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "SEC Session init failed: err: %d\n", ret);
+ return -1;
+ }
+ } else if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ struct rte_flow_error err;
+ struct rte_security_ctx *ctx = (struct rte_security_ctx *)
+ rte_eth_dev_get_sec_ctx(
+ sa->portid);
+ const struct rte_security_capability *sec_cap;
+
+ sa->sec_session = rte_security_session_create(ctx,
+ &sess_conf, ipsec_ctx->session_pool);
+ if (sa->sec_session == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "SEC Session init failed: err: %d\n", ret);
+ return -1;
+ }
+
+ sec_cap = rte_security_capabilities_get(ctx);
+
+ /* iterate until ESP tunnel*/
+ while (sec_cap->action !=
+ RTE_SECURITY_ACTION_TYPE_NONE) {
+
+ if (sec_cap->action == sa->type &&
+ sec_cap->protocol ==
+ RTE_SECURITY_PROTOCOL_IPSEC &&
+ sec_cap->ipsec.mode ==
+ RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+ sec_cap->ipsec.direction == sa->direction)
+ break;
+ sec_cap++;
+ }
+
+ if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
+ RTE_LOG(ERR, IPSEC,
+ "No suitable security capability found\n");
+ return -1;
+ }
+
+ sa->ol_flags = sec_cap->ol_flags;
+ sa->security_ctx = ctx;
+ sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+
+ if (sa->flags & IP6_TUNNEL) {
+ sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV6;
+ sa->pattern[1].mask = &rte_flow_item_ipv6_mask;
+ sa->pattern[1].spec = &sa->ipv6_spec;
+ memcpy(sa->ipv6_spec.hdr.dst_addr,
+ sa->dst.ip.ip6.ip6_b, 16);
+ memcpy(sa->ipv6_spec.hdr.src_addr,
+ sa->src.ip.ip6.ip6_b, 16);
+ } else {
+ sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ sa->pattern[1].mask = &rte_flow_item_ipv4_mask;
+ sa->pattern[1].spec = &sa->ipv4_spec;
+ sa->ipv4_spec.hdr.dst_addr = sa->dst.ip.ip4;
+ sa->ipv4_spec.hdr.src_addr = sa->src.ip.ip4;
+ }
+
+ sa->pattern[2].type = RTE_FLOW_ITEM_TYPE_ESP;
+ sa->pattern[2].spec = &sa->esp_spec;
+ sa->pattern[2].mask = &rte_flow_item_esp_mask;
+ sa->esp_spec.hdr.spi = sa->spi;
+
+ sa->pattern[3].type = RTE_FLOW_ITEM_TYPE_END;
+
+ sa->action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ sa->action[0].conf = sa->sec_session;
+
+ sa->action[1].type = RTE_FLOW_ACTION_TYPE_END;
+
+ sa->attr.egress = (sa->direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS);
+ sa->flow = rte_flow_create(sa->portid,
+ &sa->attr, sa->pattern, sa->action, &err);
+ if (sa->flow == NULL) {
+ RTE_LOG(ERR, IPSEC,
+ "Failed to create ipsec flow msg: %s\n",
+ err.message);
+ return -1;
+ }
+ }
+ } else {
+ sa->crypto_session = rte_cryptodev_sym_session_create(
+ ipsec_ctx->session_pool);
+ rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
+ sa->crypto_session, sa->xforms,
+ ipsec_ctx->session_pool);
+
+ rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
+ &cdev_info);
+ if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
+ ret = rte_cryptodev_queue_pair_attach_sym_session(
+ ipsec_ctx->tbl[cdev_id_qp].id,
+ ipsec_ctx->tbl[cdev_id_qp].qp,
+ sa->crypto_session);
+ if (ret < 0) {
+ RTE_LOG(ERR, IPSEC,
+ "Session cannot be attached to qp %u\n",
+ ipsec_ctx->tbl[cdev_id_qp].qp);
+ return -1;
+ }
}
}
sa->cdev_id_qp = cdev_id_qp;
@@ -129,7 +264,9 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
{
int32_t ret = 0, i;
struct ipsec_mbuf_metadata *priv;
+ struct rte_crypto_sym_op *sym_cop;
struct ipsec_sa *sa;
+ struct cdev_qp *cqp;
for (i = 0; i < nb_pkts; i++) {
if (unlikely(sas[i] == NULL)) {
@@ -144,23 +281,76 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
sa = sas[i];
priv->sa = sa;
- priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-
- rte_prefetch0(&priv->sym_cop);
-
- if ((unlikely(sa->crypto_session == NULL)) &&
- create_session(ipsec_ctx, sa)) {
- rte_pktmbuf_free(pkts[i]);
- continue;
- }
-
- rte_crypto_op_attach_sym_session(&priv->cop,
- sa->crypto_session);
-
- ret = xform_func(pkts[i], sa, &priv->cop);
- if (unlikely(ret)) {
- rte_pktmbuf_free(pkts[i]);
+ switch (sa->type) {
+ case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->sec_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ sym_cop = get_sym_cop(&priv->cop);
+ sym_cop->m_src = pkts[i];
+
+ rte_security_attach_session(&priv->cop,
+ sa->sec_session);
+ break;
+ case RTE_SECURITY_ACTION_TYPE_NONE:
+
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->crypto_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ rte_crypto_op_attach_sym_session(&priv->cop,
+ sa->crypto_session);
+
+ ret = xform_func(pkts[i], sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+ break;
+ case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ break;
+ case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+ priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ rte_prefetch0(&priv->sym_cop);
+
+ if ((unlikely(sa->sec_session == NULL)) &&
+ create_session(ipsec_ctx, sa)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ rte_security_attach_session(&priv->cop,
+ sa->sec_session);
+
+ ret = xform_func(pkts[i], sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkts[i]);
+ continue;
+ }
+
+ cqp = &ipsec_ctx->tbl[sa->cdev_id_qp];
+ cqp->ol_pkts[cqp->ol_pkts_cnt++] = pkts[i];
+ if (sa->ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(
+ sa->security_ctx,
+ sa->sec_session, pkts[i], NULL);
continue;
}
@@ -171,7 +361,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
static inline int
ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
- struct rte_mbuf *pkts[], uint16_t max_pkts)
+ struct rte_mbuf *pkts[], uint16_t max_pkts)
{
int32_t nb_pkts = 0, ret = 0, i, j, nb_cops;
struct ipsec_mbuf_metadata *priv;
@@ -186,6 +376,19 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
+ while (cqp->ol_pkts_cnt > 0 && nb_pkts < max_pkts) {
+ pkt = cqp->ol_pkts[--cqp->ol_pkts_cnt];
+ rte_prefetch0(pkt);
+ priv = get_priv(pkt);
+ sa = priv->sa;
+ ret = xform_func(pkt, sa, &priv->cop);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkt);
+ continue;
+ }
+ pkts[nb_pkts++] = pkt;
+ }
+
if (cqp->in_flight == 0)
continue;
@@ -203,11 +406,14 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
RTE_ASSERT(sa != NULL);
- ret = xform_func(pkt, sa, cops[j]);
- if (unlikely(ret))
- rte_pktmbuf_free(pkt);
- else
- pkts[nb_pkts++] = pkt;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+ ret = xform_func(pkt, sa, cops[j]);
+ if (unlikely(ret)) {
+ rte_pktmbuf_free(pkt);
+ continue;
+ }
+ }
+ pkts[nb_pkts++] = pkt;
}
}
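
For the inline crypto action type, the key new API usage in create_session()
above is the rte_flow rule that binds ESP traffic of one SPI to the security
session. Condensed into a standalone sketch for the IPv4 ingress case, with
the IP address spec and error handling omitted (ipsec_flow_create() is a
hypothetical name, not part of the patch):

#include <rte_flow.h>

/* Sketch: steer inbound ESP packets carrying 'spi' (network byte order)
 * on 'portid' to the inline security session 'sess'. */
static struct rte_flow *
ipsec_flow_create(uint16_t portid, uint32_t spi, void *sess,
                struct rte_flow_error *err)
{
        const struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_esp esp_spec = { .hdr = { .spi = spi } };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_ESP,
                  .spec = &esp_spec, .mask = &rte_flow_item_esp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action action[] = {
                { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sess },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(portid, &attr, pattern, action, err);
}
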
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 7d057ae..775b316 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -38,6 +38,8 @@
#include <rte_byteorder.h>
#include <rte_crypto.h>
+#include <rte_security.h>
+#include <rte_flow.h>
#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
#define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2
@@ -99,7 +101,10 @@ struct ipsec_sa {
uint32_t cdev_id_qp;
uint64_t seq;
uint32_t salt;
- struct rte_cryptodev_sym_session *crypto_session;
+ union {
+ struct rte_cryptodev_sym_session *crypto_session;
+ struct rte_security_session *sec_session;
+ };
enum rte_crypto_cipher_algorithm cipher_algo;
enum rte_crypto_auth_algorithm auth_algo;
enum rte_crypto_aead_algorithm aead_algo;
@@ -117,7 +122,28 @@ struct ipsec_sa {
uint8_t auth_key[MAX_KEY_SIZE];
uint16_t auth_key_len;
uint16_t aad_len;
- struct rte_crypto_sym_xform *xforms;
+ union {
+ struct rte_crypto_sym_xform *xforms;
+ struct rte_security_ipsec_xform *sec_xform;
+ };
+ enum rte_security_session_action_type type;
+ enum rte_security_ipsec_sa_direction direction;
+ uint16_t portid;
+ struct rte_security_ctx *security_ctx;
+ uint32_t ol_flags;
+
+#define MAX_RTE_FLOW_PATTERN (4)
+#define MAX_RTE_FLOW_ACTIONS (2)
+ struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN];
+ struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS];
+ struct rte_flow_attr attr;
+ union {
+ struct rte_flow_item_ipv4 ipv4_spec;
+ struct rte_flow_item_ipv6 ipv6_spec;
+ };
+ struct rte_flow_item_esp esp_spec;
+ struct rte_flow *flow;
+ struct rte_security_session_conf sess_conf;
} __rte_cache_aligned;
struct ipsec_mbuf_metadata {
@@ -133,6 +159,8 @@ struct cdev_qp {
uint16_t in_flight;
uint16_t len;
struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+ struct rte_mbuf *ol_pkts[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+ uint16_t ol_pkts_cnt;
};
struct ipsec_ctx {
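
The security_ctx and ol_flags fields added to struct ipsec_sa above are
filled by walking the device's security capability list in create_session().
That lookup can be sketched on its own as follows (ipsec_cap_ol_flags() is a
hypothetical helper name; as in the code above, the capability array is
terminated by an entry whose action is RTE_SECURITY_ACTION_TYPE_NONE):

#include <rte_security.h>

/* Sketch: find the device's IPsec tunnel capability that matches the SA
 * and report its ol_flags. 'ctx' is the security context of the ethernet
 * or crypto device. Returns -1 when no matching capability exists. */
static int
ipsec_cap_ol_flags(struct rte_security_ctx *ctx,
                enum rte_security_session_action_type action,
                enum rte_security_ipsec_sa_direction dir,
                uint32_t *ol_flags)
{
        const struct rte_security_capability *cap =
                rte_security_capabilities_get(ctx);

        if (cap == NULL)
                return -1;
        for (; cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++) {
                if (cap->action == action &&
                    cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC &&
                    cap->ipsec.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
                    cap->ipsec.direction == dir) {
                        *ol_flags = cap->ol_flags;
                        return 0;
                }
        }
        return -1;
}
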
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 0f5c4fe..4c448e5 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -41,16 +41,20 @@
#include <rte_memzone.h>
#include <rte_crypto.h>
+#include <rte_security.h>
#include <rte_cryptodev.h>
#include <rte_byteorder.h>
#include <rte_errno.h>
#include <rte_ip.h>
#include <rte_random.h>
+#include <rte_ethdev.h>
#include "ipsec.h"
#include "esp.h"
#include "parser.h"
+#define IPDEFTTL 64
+
struct supported_cipher_algo {
const char *keyword;
enum rte_crypto_cipher_algorithm algo;
@@ -238,6 +242,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
uint32_t src_p = 0;
uint32_t dst_p = 0;
uint32_t mode_p = 0;
+ uint32_t type_p = 0;
+ uint32_t portid_p = 0;
if (strcmp(tokens[0], "in") == 0) {
ri = &nb_sa_in;
@@ -549,6 +555,52 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
continue;
}
+ if (strcmp(tokens[ti], "type") == 0) {
+ APP_CHECK_PRESENCE(type_p, tokens[ti], status);
+ if (status->status < 0)
+ return;
+
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+
+ if (strcmp(tokens[ti], "inline-crypto-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO;
+ else if (strcmp(tokens[ti],
+ "inline-protocol-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ else if (strcmp(tokens[ti],
+ "lookaside-protocol-offload") == 0)
+ rule->type =
+ RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
+ else if (strcmp(tokens[ti], "no-offload") == 0)
+ rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
+ else {
+ APP_CHECK(0, status, "Invalid input \"%s\"",
+ tokens[ti]);
+ return;
+ }
+
+ type_p = 1;
+ continue;
+ }
+
+ if (strcmp(tokens[ti], "port_id") == 0) {
+ APP_CHECK_PRESENCE(portid_p, tokens[ti], status);
+ if (status->status < 0)
+ return;
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+ rule->portid = atoi(tokens[ti]);
+ if (status->status < 0)
+ return;
+ portid_p = 1;
+ continue;
+ }
+
/* unrecognizeable input */
APP_CHECK(0, status, "unrecognized input \"%s\"",
tokens[ti]);
@@ -579,6 +631,14 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
if (status->status < 0)
return;
+ if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+ printf("Missing portid option, falling back to non-offload\n");
+
+ if (!type_p || !portid_p) {
+ rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
+ rule->portid = -1;
+ }
+
*ri = *ri + 1;
}
@@ -646,9 +706,11 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
struct sa_ctx {
struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES];
- struct {
- struct rte_crypto_sym_xform a;
- struct rte_crypto_sym_xform b;
+ union {
+ struct {
+ struct rte_crypto_sym_xform a;
+ struct rte_crypto_sym_xform b;
+ };
} xf[IPSEC_SA_MAX_ENTRIES];
};
@@ -681,6 +743,33 @@ sa_create(const char *name, int32_t socket_id)
}
static int
+check_eth_dev_caps(uint16_t portid, uint32_t inbound)
+{
+ struct rte_eth_dev_info dev_info;
+
+ rte_eth_dev_info_get(portid, &dev_info);
+
+ if (inbound) {
+ if ((dev_info.rx_offload_capa &
+ DEV_RX_OFFLOAD_SECURITY) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware RX IPSec offload is not supported\n");
+ return -EINVAL;
+ }
+
+ } else { /* outbound */
+ if ((dev_info.tx_offload_capa &
+ DEV_TX_OFFLOAD_SECURITY) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware TX IPSec offload is not supported\n");
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+
+static int
sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
uint32_t nb_entries, uint32_t inbound)
{
@@ -699,6 +788,16 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
*sa = entries[i];
sa->seq = 0;
+ if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL ||
+ sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+ if (check_eth_dev_caps(sa->portid, inbound))
+ return -EINVAL;
+ }
+
+ sa->direction = (inbound == 1) ?
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS :
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+
switch (sa->flags) {
case IP4_TUNNEL:
sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -708,37 +807,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
iv_length = 16;
- if (inbound) {
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
- sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
- sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.aead.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.aead.op =
- RTE_CRYPTO_AEAD_OP_DECRYPT;
- sa_ctx->xf[idx].a.next = NULL;
- sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
- sa_ctx->xf[idx].a.aead.iv.length = iv_length;
- sa_ctx->xf[idx].a.aead.aad_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.aead.digest_length =
- sa->digest_len;
- } else { /* outbound */
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
- sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
- sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.aead.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.aead.op =
- RTE_CRYPTO_AEAD_OP_ENCRYPT;
- sa_ctx->xf[idx].a.next = NULL;
- sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
- sa_ctx->xf[idx].a.aead.iv.length = iv_length;
- sa_ctx->xf[idx].a.aead.aad_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.aead.digest_length =
- sa->digest_len;
- }
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
+ sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
+ sa_ctx->xf[idx].a.aead.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].a.aead.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].a.aead.op = (inbound == 1) ?
+ RTE_CRYPTO_AEAD_OP_DECRYPT :
+ RTE_CRYPTO_AEAD_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.next = NULL;
+ sa_ctx->xf[idx].a.aead.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].a.aead.iv.length = iv_length;
+ sa_ctx->xf[idx].a.aead.aad_length =
+ sa->aad_len;
+ sa_ctx->xf[idx].a.aead.digest_length =
+ sa->digest_len;
sa->xforms = &sa_ctx->xf[idx].a;
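
Finally, check_eth_dev_caps() above gates inline SAs on the new ethdev
SECURITY offload capability bits. The same test as a small standalone
predicate, shown only as a sketch (port_supports_inline_ipsec() is a
hypothetical name, not part of the patch):

#include <rte_ethdev.h>

/* Sketch: does the port advertise the SECURITY offload capability in the
 * direction this SA needs? */
static int
port_supports_inline_ipsec(uint16_t portid, int inbound)
{
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(portid, &dev_info);
        if (inbound)
                return (dev_info.rx_offload_capa &
                        DEV_RX_OFFLOAD_SECURITY) != 0;
        return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SECURITY) != 0;
}
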
--
2.9.3
^ permalink raw reply [flat|nested] 195+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] introduce security offload library
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 00/10] introduce security offload library Akhil Goyal
` (9 preceding siblings ...)
2017-10-25 15:07 ` [dpdk-dev] [PATCH v6 10/10] examples/ipsec-secgw: add support for security offload Akhil Goyal
@ 2017-10-26 1:16 ` Thomas Monjalon
10 siblings, 0 replies; 195+ messages in thread
From: Thomas Monjalon @ 2017-10-26 1:16 UTC (permalink / raw)
To: Akhil Goyal, declan.doherty, hemant.agrawal, radu.nicolau,
borisp, aviadye
Cc: dev, pablo.de.lara.guarch, sandeep.malik, jerin.jacob,
john.mcnamara, konstantin.ananyev, shahafs, olivier.matz
25/10/2017 17:07, Akhil Goyal:
> This patchset introduce the rte_security library in DPDK.
> This also includes the sample implementation of drivers and
> changes in ipsec gateway application to demonstrate its usage.
[...]
> Akhil Goyal (5):
> cryptodev: support security APIs
> security: introduce security API and framework
> doc: add details of rte security
> crypto/dpaa2_sec: add support for protocol offload ipsec
> examples/ipsec-secgw: add support for security offload
>
> Boris Pismenny (3):
> net: add ESP header to generic flow steering
> mbuf: add security crypto flags and mbuf fields
> ethdev: add rte flow action for crypto
>
> Declan Doherty (1):
> ethdev: support security APIs
>
> Radu Nicolau (1):
> net/ixgbe: enable inline ipsec
Applied, thanks for the big cross-vendor team effort
^ permalink raw reply [flat|nested] 195+ messages in thread