* [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
@ 2017-05-09 14:57 Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 1/5] cryptodev: Updated API to add suport for inline IPSec Radu Nicolau
` (6 more replies)
0 siblings, 7 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-09 14:57 UTC (permalink / raw)
To: dev; +Cc: Radu Nicolau
This RFC introduces a mechanism to support inline hardware acceleration of symmetric crypto processing of IPsec flows on Ethernet adapters within the cryptodev framework. Specifically, it includes the initial enablement work for the Intel® 82599 10 GbE Controller (IXGBE).
A number of new concepts are proposed to support this model within DPDK.
1. Extension of librte_cryptodev to support the programming of IPsec Security Association (SA) material as part of crypto session creation. This includes the definition of a new crypto transform type for IPsec, which allows the IPsec-specific material of the SA to be programmed into hardware.
By chaining the IPsec transform with the cipher and/or authentication transform, the user can specify all the SA information the hardware needs to complete crypto processing inline, and can also identify IPsec flows for processing on ingress/egress if the hardware is capable; a short sketch of chaining these transforms follows the structure definitions below.
+---------------+
|  IPsec xform  |
|    *next;     |--->+----------------+
+---------------+    |  cipher xform  |
                     |    *next;      |-->+--------------+
                     +----------------+   |  auth xform  |
                                          |    *next;    |
                                          +--------------+
enum rte_crypto_ipsec_direction {
RTE_CRYPTO_INBOUND = 1,
RTE_CRYPTO_OUTBOUND
};
struct rte_crypto_ipsec_addr {
enum ip_addr_type {
IPV4_ADDRESS,
IPV6_ADDRESS
} type; /**< IP address type IPv4/v6 */
union {
uint32_t ipv4; /**< IPv4 Address */
uint32_t ipv6[4]; /**< IPv6 Address */
}; /**< IP address */
};
struct rte_crypto_ipsec_xform {
enum rte_crypto_ipsec_direction dir; /**< Direction - In/Out */
uint32_t spi; /**< SPI */
struct rte_crypto_ipsec_addr src_ip;
/**< Source IP */
struct rte_crypto_ipsec_addr dst_ip;
/**< Destination IP */
};
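For illustration, a minimal sketch of how an application could chain the proposed IPsec xform with a cipher xform when creating an outbound session, using the pre-existing cryptodev symmetric session create API; the helper name and parameter values are illustrative only and are not part of the patches:

static struct rte_cryptodev_sym_session *
create_outbound_inline_session(uint8_t cdev_id, uint8_t *aes_gcm_key,
        uint32_t spi, uint32_t src_addr, uint32_t dst_addr)
{
    struct rte_crypto_sym_xform ipsec_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_IPSEC,
        .next = NULL,
        .ipsec = {
            .dir = RTE_CRYPTO_OUTBOUND,
            .spi = spi,
            .src_ip = { .type = IPV4_ADDRESS, .ipv4 = src_addr },
            .dst_ip = { .type = IPV4_ADDRESS, .ipv4 = dst_addr },
        },
    };
    struct rte_crypto_sym_xform cipher_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .next = &ipsec_xform,        /* chain cipher -> IPsec */
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_GCM,
            .key = { .data = aes_gcm_key, .length = 16 },
        },
    };

    /* the chained xform carries both the crypto and the SA material */
    return rte_cryptodev_sym_session_create(cdev_id, &cipher_xform);
}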
2. Introduce the ability to have a shared device instance within DPDK. The inline crypto capabilities are provided through the device's physical function. To introduce inline processing capabilities to DPDK with minimal impact on the existing infrastructure, we propose a model where a NIC PMD and a crypto PMD can share a single PCI device. This keeps the application interfaces consistent with DPDK's existing APIs and programming models, and provides a mechanism for sharing access to the PCI BAR; a sketch of how a PMD marks its device IDs as shareable follows the figure below.
+-----------+     +--------------+
|  NIC PMD  |     |  CRYPTO PMD  |
+-----------+     +--------------+
      \                 /
       +----------------+
       | rte_pci_device |
       +----------------+
               |
       +----------------+
       |    PCI BAR     |
       +----------------+
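For example, with the RTE_PCI_DEVICE_SH() macro proposed in patch 2/5, the crypto PMD can declare the same device IDs as the net PMD and the PCI probe code will keep matching the device against other drivers instead of stopping at the first match (a minimal sketch):

/* crypto PMD PCI ID table: entries are marked as shareable with the NIC PMD */
static const struct rte_pci_id pci_id_ixgbe_crypto_map[] = {
    { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP) },
    { .vendor_id = 0, /* sentinel */ },
};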
3. The definition of new Tx/Rx mbuf offload flags to indicate to the NIC PMD that a packet requires inline crypto processing on transmit, and to indicate that a packet has been processed by the inline crypto hardware on ingress; a sketch of how a stack could consume the Rx flags follows the definitions below.
/**
* Inline IPSec Rx processed packet
*/
#define PKT_RX_IPSEC_INLINE_CRYPTO (1ULL << 17)
/**
* Inline IPSec Rx packet authentication failed
*/
#define PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED (1ULL << 18)
/**
* Inline IPSec Tx process packet
*/
#define PKT_TX_IPSEC_INLINE_CRYPTO (1ULL << 43)
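A minimal sketch of how an ingress IPsec path could interpret these flags after rte_eth_rx_burst(); the helper name is illustrative only:

static inline int
inline_ipsec_rx_status(const struct rte_mbuf *m)
{
    if (m->ol_flags & PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED)
        return -1;    /* drop: inline authentication failed */
    if (m->ol_flags & PKT_RX_IPSEC_INLINE_CRYPTO)
        return 1;     /* payload was already decrypted by the NIC */
    return 0;         /* not an inline-processed packet */
}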
5. The addition of inline crypto metadata to the rte_mbuf structure, so that the required egress metadata can be passed to the NIC PMD to build the necessary transmit descriptors in tx_burst processing when PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a better approach to passing this metadata to the NIC, as it is understood that different hardware accelerators which support this offload may have different metadata requirements depending on implementation and other capabilities in the device. One possibility we have considered is to reserve the last 16 bytes of the mbuf for device-specific metadata, with a layout that is flexible depending on the hardware being used; a sketch of how the egress path would populate this metadata follows the structure below.
struct rte_mbuf {
...
/** Inline IPSec metadata*/
struct {
uint16_t sa_idx; /**< SA index */
uint8_t pad_len; /**< Padding length */
uint8_t enc;
} inline_ipsec;
} __rte_cache_aligned;
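A minimal sketch of the egress side, where the offload flag and the metadata are set on the mbuf before it reaches the NIC PMD; the helper name is illustrative, and sa_idx/pad_len come from the SA and ESP processing:

static inline void
inline_ipsec_tx_prepare(struct rte_mbuf *m, uint16_t sa_idx,
        uint8_t pad_len, uint8_t encrypt)
{
    m->ol_flags |= PKT_TX_IPSEC_INLINE_CRYPTO;
    m->inline_ipsec.sa_idx = sa_idx;      /* SA index programmed in hardware */
    m->inline_ipsec.pad_len = pad_len;    /* ESP trailer padding length */
    m->inline_ipsec.enc = encrypt;        /* 0 = authentication only */
}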
The figure below demonstrates how the new functionality allows inline crypto acceleration to be integrated into an existing IPsec stack egress path that uses the cryptodev APIs. It is important to note that on the data path the crypto PMD only processes the metadata of the mbuf and does not modify the packet payload in any way. The main function of the crypto PMD in this approach is to support the configuration of the SA material in the hardware using the cryptodev APIs and to enable transparent integration of the inline crypto acceleration into the IPsec data path. Only the IPsec stack's control path is aware of the inline processing and is required to use the extra IPsec transform outlined above. A sketch of the resulting sequence of burst API calls follows the figure.
 Egress Data Path
         |
+--------|--------+
|  egress IPsec   |
|        |        |
| +------V------+ |
| | SABD lookup | |   <------ SA maps to cryptodev session
| +------|------+ |
| +------V------+ |
| |   Tunnel    | |   <------ Add tunnel header to packet
| +------|------+ |
| +------V------+ |
| |     ESP     | |   <------ Add ESP header/trailer to packet
| +------|------+ |
| +------|------+ |
| |      \--------------------\
| |   Crypto    | |           |   <- Crypto processing through
| |      /----------------\   |      inline crypto PMD
| +------|------+ |       |   |
+--------V--------+       |   |
         |                |   |
+--------V--------+       |   |  create <-- SA is added to hw
|    L2 Stack     |       |   |  inline     using existing create
+--------|--------+       |   |  session    sym session APIs
         |                |   |    |
+--------V--------+   +---|---|----V---+
|                 |   |   \---/    |   |   <- Set inline crypto offload
|     NIC PMD     |   |   INLINE   |   |      flag and required metadata
|                 |   | CRYPTO PMD |   |      to mbuf. Packet data remains
+--------|--------+   +------------V---+      unmodified.
         |                         |
+--------|------------+       Add/Remove
|  HW ACCELERATED NIC |        SA Entry
|        |-----\      |            |
|        | +---|----+ |            |
|        | | Inline |<-------------/
|        | | Crypto | |
|        | +---|----+ |   <-- Packet Encryption/Decryption and
|        |-----/      |       Authentication happens inline
+--------|------------+
         V
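In terms of API calls, the egress sequence above reduces to the existing cryptodev/ethdev burst calls; a minimal sketch with error handling omitted and illustrative names:

static uint16_t
inline_ipsec_egress(uint8_t crypto_dev_id, uint16_t qp_id,
        uint8_t port_id, uint16_t tx_queue_id,
        struct rte_crypto_op **cops, uint16_t nb_cops)
{
    struct rte_mbuf *pkts[nb_cops];
    uint16_t nb, i;

    /* crypto PMD only sets the offload flag/metadata on the mbufs */
    nb = rte_cryptodev_enqueue_burst(crypto_dev_id, qp_id, cops, nb_cops);
    nb = rte_cryptodev_dequeue_burst(crypto_dev_id, qp_id, cops, nb);

    for (i = 0; i < nb; i++)
        pkts[i] = cops[i]->sym->m_src;

    /* the NIC performs the actual encryption inline on transmit */
    return rte_eth_tx_burst(port_id, tx_queue_id, pkts, nb);
}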
IXGBE enablement details:
- Only AES-GCM 128 ESP Tunnel/Transport mode and Authentication-only mode are supported.
IXGBE PMD:
Rx Path
- To enable decryption for incoming packets, three tables have to be programmed
  in the IXGBE device: the IP table, the SPI table, and the Key table. The first
  has 128 entries, the other two have 1024 each. An encrypted packet that needs
  to be decrypted inline requires matching entries in all tables to be processed:
  the destination IP needs to match an entry in the IP table, the SPI needs to
  match an entry in the SPI table, and the SPI table entry needs to hold a valid
  index into the Key table. If all conditions are met the packet is decrypted
  and the crypto status is set in the Rx descriptors.
- After the inline crypto processing the packet is presented to the host as a
  regular Rx packet, but all IPsec-related headers are still attached to the packet.
- The IXGBE net driver Rx path checks the descriptors and, based on the
  crypto status, sets additional flags in the rte_mbuf.ol_flags field
  (see the sketch after this list).
- If decryption is successful, the received packet contains the decrypted
  data where the encrypted data was when the packet arrived.
- On the DPDK crypto PMD side, rte_mbuf.ol_flags is checked and the
  decryption status is set accordingly.
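A minimal sketch of that descriptor-to-flag mapping in the net driver Rx path, using the status/error bit definitions added in patch 4/5; the helper name is illustrative and the actual descriptor handling in the patch may differ:

static inline void
rx_desc_to_ipsec_flags(uint32_t rx_status, struct rte_mbuf *m)
{
    if (rx_status & IXGBE_RXDADV_IPSEC_STATUS_SECP) {
        m->ol_flags |= PKT_RX_IPSEC_INLINE_CRYPTO;
        if ((rx_status & IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK) ==
                IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED)
            m->ol_flags |= PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED;
    }
}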
Tx path:
- For encryption of outgoing packets there is only one table, which contains
  the key, as all other operations are performed by software. The host needs
  to program this table and set the Tx descriptors.
- The IXGBE net driver Tx path checks the additional field
  rte_mbuf.inline_ipsec, and if the packet needs to be encrypted the Tx
  descriptors are set accordingly (see the sketch after this list).
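A minimal sketch of the corresponding Tx-side check, using the descriptor command bits defined in patch 4/5; how the SA index and pad length are encoded into the descriptor is hardware specific and omitted, and the helper name and descriptor field pointers are illustrative:

static inline void
tx_desc_set_ipsec(const struct rte_mbuf *tx_pkt, uint32_t *olinfo_status,
        uint32_t *type_tucmd)
{
    if (tx_pkt->ol_flags & PKT_TX_IPSEC_INLINE_CRYPTO) {
        *olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
        *type_tucmd |= IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP;
        if (tx_pkt->inline_ipsec.enc)
            *type_tucmd |= IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN;
        /* tx_pkt->inline_ipsec.sa_idx and .pad_len are also encoded
         * into the transmit descriptor */
    }
}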
Crypto IXGBE PMD:
- Implemented the IXGBE crypto driver; it is mostly pass-through, plus error
  checking for the enqueue/dequeue operations and the IXGBE crypto engine
  setup and configuration.
IPsec Gateway Sample Application
- The IPsec gateway example (ipsec-secgw) is updated to support inline IPsec.
Radu Nicolau (5):
cryptodev: Updated API to add suport for inline IPSec.
pci: allow shared device instances.
mbuff: added inline IPSec flags and metadata
cryptodev: added new crypto PMD supporting inline IPSec for IXGBE
examples: updated IPSec sample app to support inline IPSec
config/common_base | 7 +
drivers/crypto/Makefile | 2 +
drivers/crypto/ixgbe/Makefile | 63 +++
drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c | 576 +++++++++++++++++++++
drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h | 180 +++++++
drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c | 474 +++++++++++++++++
.../crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map | 3 +
drivers/net/ixgbe/ixgbe_ethdev.c | 128 ++---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 34 ++
examples/ipsec-secgw/esp.c | 7 +-
examples/ipsec-secgw/ipsec.h | 2 +
examples/ipsec-secgw/sa.c | 165 ++++--
lib/librte_cryptodev/rte_crypto_sym.h | 34 +-
lib/librte_cryptodev/rte_cryptodev.h | 5 +-
lib/librte_eal/common/eal_common_pci.c | 15 +-
lib/librte_eal/common/include/rte_pci.h | 18 +-
lib/librte_mbuf/rte_mbuf.h | 22 +
mk/rte.app.mk | 1 +
19 files changed, 1625 insertions(+), 133 deletions(-)
create mode 100644 drivers/crypto/ixgbe/Makefile
create mode 100644 drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c
create mode 100644 drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h
create mode 100644 drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c
create mode 100644 drivers/crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map
--
2.7.4
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [RFC][PATCH 1/5] cryptodev: Updated API to add suport for inline IPSec.
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
@ 2017-05-09 14:57 ` Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances Radu Nicolau
` (5 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-09 14:57 UTC (permalink / raw)
To: dev; +Cc: Radu Nicolau
Added a new xform, rte_crypto_ipsec_xform.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
lib/librte_cryptodev/rte_crypto_sym.h | 34 +++++++++++++++++++++++++++++++++-
1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 3a40844..b11df10 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -346,11 +346,40 @@ struct rte_crypto_auth_xform {
*/
};
+
+enum rte_crypto_ipsec_direction {
+ RTE_CRYPTO_INBOUND = 1,
+ RTE_CRYPTO_OUTBOUND
+};
+
+struct rte_crypto_ipsec_addr {
+ enum ip_addr_type {
+ IPV4_ADDRESS,
+ IPV6_ADDRESS
+ } type; /**< IP address type IPv4/v6 */
+
+ union {
+ uint32_t ipv4; /**< IPv4 Address */
+ uint32_t ipv6[4]; /**< IPv6 Address */
+ }; /**< IP address */
+};
+
+
+struct rte_crypto_ipsec_xform {
+ enum rte_crypto_ipsec_direction dir; /**< Direction - In/Out */
+ uint32_t spi; /**< SPI */
+ uint32_t salt; /**< Salt */
+ struct rte_crypto_ipsec_addr src_ip; /**< Source IP */
+ struct rte_crypto_ipsec_addr dst_ip; /**< Destination IP */
+};
+
+
/** Crypto transformation types */
enum rte_crypto_sym_xform_type {
RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED = 0, /**< No xform specified */
RTE_CRYPTO_SYM_XFORM_AUTH, /**< Authentication xform */
- RTE_CRYPTO_SYM_XFORM_CIPHER /**< Cipher xform */
+ RTE_CRYPTO_SYM_XFORM_CIPHER, /**< Cipher xform */
+ RTE_CRYPTO_SYM_XFORM_IPSEC /**< IPsec xform */
};
/**
@@ -373,6 +402,9 @@ struct rte_crypto_sym_xform {
/**< Authentication / hash xform */
struct rte_crypto_cipher_xform cipher;
/**< Cipher xform */
+ struct rte_crypto_ipsec_xform ipsec;
+ /**< IPsec xform */
+
};
};
--
2.7.4
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 1/5] cryptodev: Updated API to add suport for inline IPSec Radu Nicolau
@ 2017-05-09 14:57 ` Radu Nicolau
2017-05-10 9:09 ` Thomas Monjalon
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 3/5] mbuff: added inline IPSec flags and metadata Radu Nicolau
` (4 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Radu Nicolau @ 2017-05-09 14:57 UTC (permalink / raw)
To: dev; +Cc: Radu Nicolau
Updated PCI initialization code to allow devices to be shared across multiple PMDs.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
lib/librte_eal/common/eal_common_pci.c | 15 ++++++++++++---
lib/librte_eal/common/include/rte_pci.h | 18 ++++++++++++++----
2 files changed, 26 insertions(+), 7 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_pci.c b/lib/librte_eal/common/eal_common_pci.c
index b749991..8fdc38f 100644
--- a/lib/librte_eal/common/eal_common_pci.c
+++ b/lib/librte_eal/common/eal_common_pci.c
@@ -203,7 +203,7 @@ static int
rte_pci_probe_one_driver(struct rte_pci_driver *dr,
struct rte_pci_device *dev)
{
- int ret;
+ int ret = 1;
struct rte_pci_addr *loc;
if ((dr == NULL) || (dev == NULL))
@@ -254,6 +254,11 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
rte_pci_unmap_device(dev);
}
+ if (!dr->id_table->shared || ret) {
+ return ret;
+ }
+ /* else continue to parse the table for another match */
+
return ret;
}
@@ -303,6 +308,7 @@ pci_probe_all_drivers(struct rte_pci_device *dev)
{
struct rte_pci_driver *dr = NULL;
int rc = 0;
+ int res = 1;
if (dev == NULL)
return -1;
@@ -319,9 +325,12 @@ pci_probe_all_drivers(struct rte_pci_device *dev)
if (rc > 0)
/* positive value means driver doesn't support it */
continue;
- return 0;
+ if (dr->id_table->shared)
+ res = 0;
+ else
+ return 0;
}
- return 1;
+ return res;
}
/*
diff --git a/lib/librte_eal/common/include/rte_pci.h b/lib/librte_eal/common/include/rte_pci.h
index ab64c63..3a66ef4 100644
--- a/lib/librte_eal/common/include/rte_pci.h
+++ b/lib/librte_eal/common/include/rte_pci.h
@@ -135,6 +135,7 @@ struct rte_pci_id {
uint16_t device_id; /**< Device ID or PCI_ANY_ID. */
uint16_t subsystem_vendor_id; /**< Subsystem vendor ID or PCI_ANY_ID. */
uint16_t subsystem_device_id; /**< Subsystem device ID or PCI_ANY_ID. */
+ uint8_t shared; /**< Device can be shared by multiple drivers. */
};
/**
@@ -187,22 +188,31 @@ struct rte_pci_device {
#ifdef __cplusplus
/** C++ macro used to help building up tables of device IDs */
-#define RTE_PCI_DEVICE(vend, dev) \
+#define _RTE_PCI_DEVICE_SH(vend, dev, sh) \
RTE_CLASS_ANY_ID, \
(vend), \
(dev), \
PCI_ANY_ID, \
- PCI_ANY_ID
+ PCI_ANY_ID, \
+ (sh)
#else
/** Macro used to help building up tables of device IDs */
-#define RTE_PCI_DEVICE(vend, dev) \
+#define _RTE_PCI_DEVICE_SH(vend, dev, sh) \
.class_id = RTE_CLASS_ANY_ID, \
.vendor_id = (vend), \
.device_id = (dev), \
.subsystem_vendor_id = PCI_ANY_ID, \
- .subsystem_device_id = PCI_ANY_ID
+ .subsystem_device_id = PCI_ANY_ID, \
+ .shared = (sh)
#endif
+#define RTE_PCI_DEVICE(vend, dev) \
+ _RTE_PCI_DEVICE_SH((vend), (dev), 0)
+#define RTE_PCI_DEVICE_SH(vend, dev) \
+ _RTE_PCI_DEVICE_SH((vend), (dev), 1)
+
+struct rte_pci_driver;
+
/**
* Initialisation function for the driver called during PCI probing.
*/
--
2.7.4
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [RFC][PATCH 3/5] mbuff: added inline IPSec flags and metadata
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 1/5] cryptodev: Updated API to add suport for inline IPSec Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances Radu Nicolau
@ 2017-05-09 14:57 ` Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 4/5] cryptodev: added new crypto PMD supporting inline IPSec for IXGBE Radu Nicolau
` (3 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-09 14:57 UTC (permalink / raw)
To: dev; +Cc: Radu Nicolau
Added inline IPSec status flags to ol_flags and a new member for IPSec metadata.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 9097f18..e4eba43 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -189,11 +189,27 @@ extern "C" {
*/
#define PKT_RX_TIMESTAMP (1ULL << 17)
+/**
+ * Inline IPSec Rx processed packet
+ */
+#define PKT_RX_IPSEC_INLINE_CRYPTO (1ULL << 18)
+
+/**
+ * Inline IPSec Rx packet authentication failed
+ */
+#define PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED (1ULL << 19)
+
+
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Inline IPSec Tx process packet
+ */
+#define PKT_TX_IPSEC_INLINE_CRYPTO (1ULL << 43)
+
+/**
* Offload the MACsec. This flag must be set by the application to enable
* this offload feature for a packet to be transmitted.
*/
@@ -542,6 +558,12 @@ struct rte_mbuf {
/** Sequence number. See also rte_reorder_insert(). */
uint32_t seqn;
+ /** Inline IPSec metadata*/
+ struct {
+ uint16_t sa_idx; /**< SA index */
+ uint8_t pad_len; /**< Padding length */
+ uint8_t enc;
+ } inline_ipsec;
} __rte_cache_aligned;
/**
--
2.7.4
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [RFC][PATCH 4/5] cryptodev: added new crypto PMD supporting inline IPSec for IXGBE
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
` (2 preceding siblings ...)
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 3/5] mbuff: added inline IPSec flags and metadata Radu Nicolau
@ 2017-05-09 14:57 ` Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 5/5] examples: updated IPSec sample app to support inline IPSec Radu Nicolau
` (2 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-09 14:57 UTC (permalink / raw)
To: dev; +Cc: Radu Nicolau
Implemented a new cryptodev PMD and updated the net/ixgbe driver.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
config/common_base | 7 +
drivers/crypto/Makefile | 2 +
drivers/crypto/ixgbe/Makefile | 63 +++
drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c | 576 +++++++++++++++++++++
drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h | 180 +++++++
drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c | 474 +++++++++++++++++
.../crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map | 3 +
drivers/net/ixgbe/ixgbe_ethdev.c | 128 ++---
drivers/net/ixgbe/ixgbe_rxtx.c | 22 +-
drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 34 ++
lib/librte_cryptodev/rte_cryptodev.h | 5 +-
mk/rte.app.mk | 1 +
12 files changed, 1427 insertions(+), 68 deletions(-)
create mode 100644 drivers/crypto/ixgbe/Makefile
create mode 100644 drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c
create mode 100644 drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h
create mode 100644 drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c
create mode 100644 drivers/crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map
diff --git a/config/common_base b/config/common_base
index 8907bea..f4ab094 100644
--- a/config/common_base
+++ b/config/common_base
@@ -513,6 +513,13 @@ CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
#
+# Compile PMD for IXGBE inline ipsec device
+#
+CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO=n
+CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO_DEBUG=n
+
+
+#
# Compile generic event device library
#
CONFIG_RTE_LIBRTE_EVENTDEV=y
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 7a719b9..84019cf 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -55,5 +55,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
DEPDIRS-null = $(core-libs)
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
DEPDIRS-dpaa2_sec = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += ixgbe
+DEPDIRS-ixgbe = $(core-libs)
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/ixgbe/Makefile b/drivers/crypto/ixgbe/Makefile
new file mode 100644
index 0000000..ca3102f
--- /dev/null
+++ b/drivers/crypto/ixgbe/Makefile
@@ -0,0 +1,63 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+
+# library name
+LIB = librte_pmd_ixgbe_crypto.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+CFLAGS += -I$(RTE_SDK)/drivers/net/ixgbe/
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_ixgbe_crypto_version.map
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += ixgbe_crypto_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += ixgbe_rte_cyptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c b/drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c
new file mode 100644
index 0000000..ca34e65
--- /dev/null
+++ b/drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c
@@ -0,0 +1,576 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_ip.h>
+
+#include "../ixgbe/ixgbe_crypto_pmd_private.h"
+
+static const struct rte_cryptodev_capabilities ixgbe_crypto_pmd_capabilities[] = {
+ { /* AES GCM (AUTH) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ }
+ }, }
+ }, }
+ },
+ { /* AES GCM (CIPHER) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* Inline IPSEC (CIPHER)*/
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_IPSEC,
+ }, }
+ },
+ { /* Inline IPSEC (AUTH)*/
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_IPSEC,
+ }, }
+ },
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/** Configure device */
+static int
+ixgbe_crypto_pmd_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+/** Start device */
+static int
+ixgbe_crypto_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+/** Stop device */
+static void
+ixgbe_crypto_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+ixgbe_crypto_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+/** Get device statistics */
+static void
+ixgbe_crypto_pmd_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ int qp_id;
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ struct ixgbe_crypto_qp *qp = dev->data->queue_pairs[qp_id];
+
+ stats->enqueued_count += qp->qp_stats.enqueued_count;
+ stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+ stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+ stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+ }
+}
+
+/** Reset device statistics */
+static void
+ixgbe_crypto_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+ int qp_id;
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ struct ixgbe_crypto_qp *qp = dev->data->queue_pairs[qp_id];
+
+ memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+ }
+}
+
+
+/** Get device info */
+static void
+ixgbe_crypto_pmd_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct ixgbe_crypto_private *internals = dev->data->dev_private;
+
+ if (dev_info != NULL) {
+ dev_info->dev_type = dev->dev_type;
+ dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
+ dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->capabilities = ixgbe_crypto_pmd_capabilities;
+ }
+}
+
+/** Release queue pair */
+static int
+ixgbe_crypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+ if (dev->data->queue_pairs[qp_id] != NULL) {
+ rte_free(dev->data->queue_pairs[qp_id]);
+ dev->data->queue_pairs[qp_id] = NULL;
+ }
+ return 0;
+}
+
+/** set a unique name for the queue pair based on it's name, dev_id and qp_id */
+static int
+ixgbe_crypto_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+ struct ixgbe_crypto_qp *qp)
+{
+ unsigned n = snprintf(qp->name, sizeof(qp->name),
+ "inln_crypto_%u_qp_%u",
+ dev->data->dev_id, qp->id);
+
+ if (n > sizeof(qp->name))
+ return -1;
+
+ return 0;
+}
+
+/** Create a ring to place process packets on */
+static struct rte_ring *
+ixgbe_crypto_pmd_qp_create_processed_pkts_ring(struct ixgbe_crypto_qp *qp,
+ unsigned ring_size, int socket_id)
+{
+ struct rte_ring *r;
+
+ r = rte_ring_lookup(qp->name);
+ if (r) {
+ if (rte_ring_get_size(r) >= ring_size) {
+ IXGBE_CRYPTO_LOG_INFO(
+ "Reusing existing ring %s for processed packets",
+ qp->name);
+ return r;
+ }
+
+ IXGBE_CRYPTO_LOG_INFO(
+ "Unable to reuse existing ring %s for processed packets",
+ qp->name);
+ return NULL;
+ }
+
+ return rte_ring_create(qp->name, ring_size, socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+ixgbe_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ struct ixgbe_crypto_private *internals = dev->data->dev_private;
+ struct ixgbe_crypto_qp *qp;
+ int retval;
+
+ if (qp_id >= internals->max_nb_qpairs) {
+ IXGBE_CRYPTO_LOG_ERR("Invalid qp_id %u, greater than maximum "
+ "number of queue pairs supported (%u).",
+ qp_id, internals->max_nb_qpairs);
+ return (-EINVAL);
+ }
+
+ /* Free memory prior to re-allocation if needed. */
+ if (dev->data->queue_pairs[qp_id] != NULL)
+ ixgbe_crypto_pmd_qp_release(dev, qp_id);
+
+ /* Allocate the queue pair data structure. */
+ qp = rte_zmalloc_socket("Null Crypto PMD Queue Pair", sizeof(*qp),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (qp == NULL) {
+ IXGBE_CRYPTO_LOG_ERR("Failed to allocate queue pair memory");
+ return (-ENOMEM);
+ }
+
+ qp->id = qp_id;
+ dev->data->queue_pairs[qp_id] = qp;
+
+ retval = ixgbe_crypto_pmd_qp_set_unique_name(dev, qp);
+ if (retval) {
+ IXGBE_CRYPTO_LOG_ERR("Failed to create unique name for ixgbe inline "
+ "crypto device");
+ goto qp_setup_cleanup;
+ }
+
+ qp->processed_pkts = ixgbe_crypto_pmd_qp_create_processed_pkts_ring(qp,
+ qp_conf->nb_descriptors, socket_id);
+ if (qp->processed_pkts == NULL) {
+ IXGBE_CRYPTO_LOG_ERR("Failed to create unique name for ixgbe inline "
+ "crypto device");
+ goto qp_setup_cleanup;
+ }
+
+ qp->sess_mp = dev->data->session_pool;
+
+ memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+ return 0;
+
+qp_setup_cleanup:
+ if (qp)
+ rte_free(qp);
+
+ return -1;
+}
+
+/** Start queue pair */
+static int
+ixgbe_crypto_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused uint16_t queue_pair_id)
+{
+ return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+ixgbe_crypto_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused uint16_t queue_pair_id)
+{
+ return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+ixgbe_crypto_pmd_qp_count(struct rte_cryptodev *dev)
+{
+ return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the inline crypto crypto session structure */
+static unsigned
+ixgbe_crypto_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct ixgbe_crypto_session);
+}
+
+/** Configure a null crypto session from a crypto xform chain */
+static void *
+ixgbe_crypto_pmd_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform, void *sess)
+{
+ int retval;
+
+ if (unlikely(sess == NULL)) {
+ IXGBE_CRYPTO_LOG_ERR("invalid session struct");
+ return NULL;
+ }
+ retval = ixgbe_crypto_set_session_parameters(
+ (struct ixgbe_crypto_session *)sess, xform);
+ if (retval != 0) {
+ IXGBE_CRYPTO_LOG_ERR("failed configure session parameters");
+ return NULL;
+ }
+
+ if (retval < 0) {
+ IXGBE_CRYPTO_LOG_ERR("failed to add crypto session");
+ return NULL;
+ }
+
+ retval = crypto_ixgbe_add_sa(dev, sess);
+ ((struct ixgbe_crypto_session *)sess)->sa_index = retval;
+
+ return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+ixgbe_crypto_pmd_session_clear(struct rte_cryptodev *dev __rte_unused,
+ void *sess)
+{
+ if (sess)
+ memset(sess, 0, sizeof(struct ixgbe_crypto_session));
+}
+
+
+/** verify and set session parameters */
+int
+ixgbe_crypto_set_session_parameters(
+ struct ixgbe_crypto_session *sess,
+ const struct rte_crypto_sym_xform *xform)
+{
+ const struct rte_crypto_sym_xform *auth_xform = NULL;
+ const struct rte_crypto_sym_xform *cipher_xform = NULL;
+ const struct rte_crypto_sym_xform *ipsec_xform;
+
+ if (xform->next == NULL || xform->next->next != NULL) {
+ IXGBE_CRYPTO_LOG_ERR("Two and only two chained xform required");
+ return -EINVAL;
+ }
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+ xform->next->type == RTE_CRYPTO_SYM_XFORM_IPSEC) {
+ cipher_xform = xform;
+ ipsec_xform = xform->next;
+ } else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xform->next->type == RTE_CRYPTO_SYM_XFORM_IPSEC) {
+ auth_xform = xform;
+ ipsec_xform = xform->next;
+ } else if (xform->type == RTE_CRYPTO_SYM_XFORM_IPSEC &&
+ xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ ipsec_xform = xform;
+ cipher_xform = xform->next;
+ } else if (xform->type == RTE_CRYPTO_SYM_XFORM_IPSEC &&
+ xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ ipsec_xform = xform;
+ auth_xform = xform->next;
+ } else {
+ IXGBE_CRYPTO_LOG_ERR("Cipher or auth xform and ipsec xform required");
+ return -EINVAL;
+ }
+
+
+ if ((cipher_xform && cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_GCM) ||
+ (auth_xform && auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GCM)) {
+ IXGBE_CRYPTO_LOG_ERR("Only AES GCM supported");
+ return -EINVAL;
+ }
+
+ /* Select Crypto operation */
+ if (ipsec_xform->ipsec.dir == RTE_CRYPTO_OUTBOUND)
+ sess->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
+ else if (ipsec_xform->ipsec.dir == RTE_CRYPTO_INBOUND)
+ sess->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
+ else {
+ IXGBE_CRYPTO_LOG_ERR("Invalid operation");
+ return -EINVAL;
+ }
+
+ if (cipher_xform != NULL) {
+ sess->enc = 1;
+ sess->key = cipher_xform->cipher.key.data;
+ } else {
+ sess->enc = 0;
+ sess->key = auth_xform->auth.key.data;
+ }
+
+ sess->salt = ipsec_xform->ipsec.salt;
+ sess->dir = ipsec_xform->ipsec.dir;
+ sess->dst_ip = ipsec_xform->ipsec.dst_ip;
+ sess->src_ip = ipsec_xform->ipsec.src_ip;
+ sess->spi = ipsec_xform->ipsec.spi;
+
+ return 0;
+}
+
+
+
+/** Process crypto operation for mbuf */
+static int
+process_op(const struct ixgbe_crypto_qp *qp, struct rte_crypto_op *op,
+ struct ixgbe_crypto_session *sess)
+{
+
+ if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ if (op->sym->m_src->ol_flags & PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else if (!(op->sym->m_src->ol_flags & PKT_RX_IPSEC_INLINE_CRYPTO))
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ else
+ {
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ /*FIXME why is the buffer coming 4 bytes short?*/
+ {
+ struct rte_mbuf *buff = op->sym->m_dst? op->sym->m_dst: op->sym->m_src;
+ struct ipv4_hdr *ip4 = rte_pktmbuf_mtod(buff, struct ipv4_hdr*);
+ uint16_t plen = rte_pktmbuf_pkt_len(buff);
+ uint16_t iplen;
+ if ((ip4->version_ihl & 0xf0) == 0x40) {
+ iplen = rte_be_to_cpu_16(ip4->total_length);
+
+ } else {
+ struct ipv6_hdr *ip6 = (struct ipv6_hdr*)ip4;
+ iplen = rte_be_to_cpu_16(ip6->payload_len) + sizeof(struct ipv6_hdr);
+ }
+ if (iplen > plen)
+ rte_pktmbuf_append(buff, (iplen - plen));
+ }
+ }
+
+ } else {
+ /* set status as successful by default */
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ struct rte_mbuf *buff = op->sym->m_dst? op->sym->m_dst: op->sym->m_src;
+ uint8_t *pad_len = rte_pktmbuf_mtod_offset(buff, uint8_t *,
+ rte_pktmbuf_pkt_len(buff) - 18);
+ buff->ol_flags |= PKT_TX_IPSEC_INLINE_CRYPTO;
+ buff->inline_ipsec.enc = sess->enc;
+ buff->inline_ipsec.sa_idx = sess->sa_index;
+ buff->inline_ipsec.pad_len = *pad_len;
+ }
+
+ /*
+ * if crypto session and operation are valid just enqueue the packet
+ * in the processed ring
+ */
+ return rte_ring_enqueue(qp->processed_pkts, (void *)op);
+}
+
+static struct ixgbe_crypto_session *
+get_session(struct ixgbe_crypto_qp *qp, struct rte_crypto_sym_op *op)
+{
+ struct ixgbe_crypto_session *sess;
+
+ if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (unlikely(op->session == NULL ||
+ op->session->dev_type != RTE_CRYPTODEV_IXGBE_PMD))
+ return NULL;
+
+ sess = (struct ixgbe_crypto_session *)op->session->_private;
+ } else {
+ struct rte_cryptodev_session *c_sess = NULL;
+
+ if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+ return NULL;
+
+ sess = (struct ixgbe_crypto_session *)c_sess->_private;
+
+ if (ixgbe_crypto_set_session_parameters(sess, op->xform) != 0)
+ return NULL;
+ }
+
+ return sess;
+}
+
+/** Enqueue burst */
+uint16_t
+ixgbe_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ struct ixgbe_crypto_session *sess;
+ struct ixgbe_crypto_qp *qp = queue_pair;
+
+ int i, retval;
+
+ for (i = 0; i < nb_ops; i++) {
+ sess = get_session(qp, ops[i]->sym);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
+ retval = process_op(qp, ops[i], sess);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
+ }
+
+ qp->qp_stats.enqueued_count += i;
+ return i;
+
+enqueue_err:
+ if (ops[i])
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+
+ qp->qp_stats.enqueue_err_count++;
+ return i;
+}
+
+/** Dequeue burst */
+uint16_t
+ixgbe_crypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ struct ixgbe_crypto_qp *qp = queue_pair;
+
+ unsigned nb_dequeued;
+
+ nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+ qp->qp_stats.dequeued_count += nb_dequeued;
+
+ return nb_dequeued;
+}
+
+struct rte_cryptodev_ops ixgbe_crypto_pmd_ops = {
+ .dev_configure = ixgbe_crypto_pmd_config,
+ .dev_start = ixgbe_crypto_pmd_start,
+ .dev_stop = ixgbe_crypto_pmd_stop,
+ .dev_close = ixgbe_crypto_pmd_close,
+
+ .stats_get = ixgbe_crypto_pmd_stats_get,
+ .stats_reset = ixgbe_crypto_pmd_stats_reset,
+
+ .dev_infos_get = ixgbe_crypto_pmd_info_get,
+
+ .queue_pair_setup = ixgbe_crypto_pmd_qp_setup,
+ .queue_pair_release = ixgbe_crypto_pmd_qp_release,
+ .queue_pair_start = ixgbe_crypto_pmd_qp_start,
+ .queue_pair_stop = ixgbe_crypto_pmd_qp_stop,
+ .queue_pair_count = ixgbe_crypto_pmd_qp_count,
+
+ .session_get_size = ixgbe_crypto_pmd_session_get_size,
+ .session_configure = ixgbe_crypto_pmd_session_configure,
+ .session_clear = ixgbe_crypto_pmd_session_clear
+};
diff --git a/drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h b/drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h
new file mode 100644
index 0000000..42c1f60
--- /dev/null
+++ b/drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h
@@ -0,0 +1,180 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _IXGBE_CRYPTO_PMD_PRIVATE_H_
+#define _IXGBE_CRYPTO_PMD_PRIVATE_H_
+
+#include "rte_config.h"
+#include "rte_ethdev.h"
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_api.h"
+#include "ixgbe_rxtx.h"
+#include "ixgbe_ethdev.h"
+
+#define IXGBE_CRYPTO_LOG_ERR(fmt, args...) \
+ RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+ RTE_STR(CRYPTODEV_NAME_IXGBE_PMD), \
+ __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_PMD_IXGBE_CRYPTO_DEBUG
+#define IXGBE_CRYPTO_LOG_INFO(fmt, args...) \
+ RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+ RTE_STR(CRYPTODEV_NAME_IXGBE_PMD), \
+ __func__, __LINE__, ## args)
+
+#define IXGBE_CRYPTO_LOG_DBG(fmt, args...) \
+ RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+ RTE_STR(CRYPTODEV_NAME_IXGBE_PMD), \
+ __func__, __LINE__, ## args)
+#else
+#define IXGBE_CRYPTO_LOG_INFO(fmt, args...)
+#define IXGBE_CRYPTO_LOG_DBG(fmt, args...)
+#endif
+
+
+#define IPSRXIDX_RX_EN 0x00000001
+#define IPSRXIDX_TABLE_IP 0x00000002
+#define IPSRXIDX_TABLE_SPI 0x00000004
+#define IPSRXIDX_TABLE_KEY 0x00000006
+#define IPSRXIDX_WRITE 0x80000000
+#define IPSRXIDX_READ 0x40000000
+#define IPSRXMOD_VALID 0x00000001
+#define IPSRXMOD_PROTO 0x00000004
+#define IPSRXMOD_DECRYPT 0x00000008
+#define IPSRXMOD_IPV6 0x00000010
+#define IXGBE_ADVTXD_POPTS_IPSEC 0x00000400
+#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000
+#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000
+#define IXGBE_RXDADV_IPSEC_STATUS_SECP 0x00020000
+#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK 0x18000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH 0x10000000
+#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED 0x18000000
+
+#define IPSEC_MAX_RX_IP_COUNT 128
+#define IPSEC_MAX_SA_COUNT 1024
+
+
+enum ixgbe_operation {
+ IXGBE_OP_AUTHENTICATED_ENCRYPTION,
+ IXGBE_OP_AUTHENTICATED_DECRYPTION
+};
+
+enum ixgbe_gcm_key {
+ IXGBE_GCM_KEY_128,
+ IXGBE_GCM_KEY_256
+};
+
+/** inline crypto crypto queue pair */
+struct ixgbe_crypto_qp {
+ uint16_t id;
+ /**< Queue Pair Identifier */
+ char name[RTE_CRYPTODEV_NAME_LEN];
+ /**< Unique Queue Pair Name */
+ struct rte_ring *processed_pkts;
+ /**< Ring for placing process packets */
+ struct rte_mempool *sess_mp;
+ /**< Session Mempool */
+ struct rte_cryptodev_stats qp_stats;
+ /**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** inline crypto crypto private session structure */
+struct ixgbe_crypto_session {
+ enum ixgbe_operation op;
+ uint8_t enc;
+ enum rte_crypto_ipsec_direction dir;
+ uint8_t* key;
+ uint32_t salt;
+ uint32_t sa_index;
+ uint32_t spi;
+ struct rte_crypto_ipsec_addr src_ip;
+ struct rte_crypto_ipsec_addr dst_ip;
+
+ uint32_t reserved;
+} __rte_cache_aligned;
+
+struct ixgbe_crypto_rx_ip_table
+{
+ struct rte_crypto_ipsec_addr ip;
+ uint16_t ref_count;
+};
+struct ixgbe_crypto_rx_sa_table
+{
+ uint32_t spi;
+ uint32_t ip_index;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t mode;
+ uint8_t used;
+};
+
+struct ixgbe_crypto_tx_sa_table
+{
+ uint32_t spi;
+ uint32_t key[4];
+ uint32_t salt;
+ uint8_t used;
+};
+
+
+/** private data structure for each inline crypto crypto device */
+struct ixgbe_crypto_private {
+ struct ixgbe_adapter ixgbe_private;
+ unsigned max_nb_qpairs; /**< Max number of queue pairs */
+ unsigned max_nb_sessions; /**< Max number of sessions */
+#define IS_INITIALIZED (1 << 0)
+ uint8_t flags;
+ struct ixgbe_crypto_rx_ip_table rx_ip_table[IPSEC_MAX_RX_IP_COUNT];
+ struct ixgbe_crypto_rx_sa_table rx_sa_table[IPSEC_MAX_SA_COUNT];
+ struct ixgbe_crypto_tx_sa_table tx_sa_table[IPSEC_MAX_SA_COUNT];
+};
+
+
+/** Set and validate inline crypto crypto session parameters */
+extern int
+ixgbe_crypto_set_session_parameters(struct ixgbe_crypto_session *sess,
+ const struct rte_crypto_sym_xform *xform);
+extern uint16_t
+ixgbe_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops);
+extern uint16_t
+ixgbe_crypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops);
+
+int crypto_ixgbe_add_sa(struct rte_cryptodev *cryptodev, struct ixgbe_crypto_session *sess);
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops ixgbe_crypto_pmd_ops;
+
+#endif /* _IXGBE_CRYPTO_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c b/drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c
new file mode 100644
index 0000000..3d73c86
--- /dev/null
+++ b/drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c
@@ -0,0 +1,474 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+
+#include "../ixgbe/ixgbe_crypto_pmd_private.h"
+
+#define IXGBE_WAIT_RW(__reg,__rw) \
+{ \
+ IXGBE_WRITE_REG(hw, (__reg), reg); \
+ while ((IXGBE_READ_REG(hw, (__reg))) & (__rw)); \
+}
+#define IXGBE_WAIT_RREAD IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_READ)
+#define IXGBE_WAIT_RWRITE IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_WRITE)
+#define IXGBE_WAIT_TREAD IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_READ)
+#define IXGBE_WAIT_TWRITE IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_WRITE)
+
+#define CMP_IP(a, b) \
+ ((a).ipv6[0] == (b).ipv6[0] && (a).ipv6[1] == (b).ipv6[1] && \
+ (a).ipv6[2] == (b).ipv6[2] && (a).ipv6[3] == (b).ipv6[3])
+
+
+static void crypto_ixgbe_clear_ipsec_tables(struct rte_cryptodev *cryptodev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(cryptodev->data->dev_private);
+ int i = 0;
+
+ /* clear Rx IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ uint16_t index = i << 3;
+ uint32_t reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+ IXGBE_WAIT_RWRITE;
+ }
+
+ /* clear Rx SPI and Rx/Tx SA tables*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ uint32_t index = i << 3;
+ uint32_t reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+ IXGBE_WAIT_RWRITE;
+ reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+ IXGBE_WAIT_RWRITE;
+ reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | index;
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+ IXGBE_WAIT_TWRITE;
+ }
+}
+
+static int crypto_ixgbe_enable_ipsec(struct rte_cryptodev *cryptodev)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cryptodev->device);
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(cryptodev->data->dev_private);
+ struct rte_eth_link link;
+ uint8_t port_id;
+ uint32_t reg;
+ int ret;
+ char pci_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+ hw->device_id = pci_dev->id.device_id;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+ ixgbe_set_mac_type(hw);
+ rte_pci_device_name(&(RTE_DEV_TO_PCI(cryptodev->device)->addr), pci_name, sizeof(pci_name));
+ ret = rte_eth_dev_get_port_by_name(pci_name, &port_id);
+
+ if (ret) {
+ IXGBE_CRYPTO_LOG_ERR("Error getting the port id");
+ return -1;
+ }
+ else {
+ IXGBE_CRYPTO_LOG_DBG("inline ipsec crypto device at %s port id %d",
+ pci_name, port_id);
+ }
+
+ /* Halt the data paths */
+ reg = IXGBE_SECTXCTRL_TX_DIS;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, reg);
+ reg = IXGBE_SECRXCTRL_RX_DIS;
+ IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, reg);
+
+ /* Wait for Tx path to empty */
+ do {
+ rte_eth_link_get_nowait(port_id, &link);
+ if (link.link_status != ETH_LINK_UP) {
+ /* Fix for HSD:4426139
+ If the Tx FIFO has data but no link, we can't clear the Tx Sec
+ block. So set MAC loopback before block clear*/
+ reg = IXGBE_READ_REG(hw, IXGBE_MACC);
+ reg |= IXGBE_MACC_FLU;
+ IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
+
+ reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ reg |= IXGBE_HLREG0_LPBK;
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+ struct timespec time;
+ time.tv_sec = 0;
+ time.tv_nsec = 1000000 * 3;
+ nanosleep(&time, NULL);
+ }
+
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXSTAT);
+
+ rte_eth_link_get_nowait(port_id, &link);
+ if (link.link_status != ETH_LINK_UP) {
+ reg = IXGBE_READ_REG(hw, IXGBE_MACC);
+ reg &= ~(IXGBE_MACC_FLU);
+ IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
+
+ reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ reg &= ~(IXGBE_HLREG0_LPBK);
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+ }
+ } while (!(reg & IXGBE_SECTXSTAT_SECTX_RDY));
+
+ /* Wait for Rx path to empty*/
+ do
+ {
+ reg = IXGBE_READ_REG(hw, IXGBE_SECRXSTAT);
+ }
+ while (!(reg & IXGBE_SECRXSTAT_SECRX_RDY));
+
+ /* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
+
+ /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
+ hang will occur with heavy traffic.*/
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+ reg = (reg & 0xFFFFFFF0) | 0x3;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+
+ reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+
+ /* Enable the Tx crypto engine and restart the Tx data path;
+ set the STORE_FORWARD bit for IPSec.*/
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, IXGBE_SECTXCTRL_STORE_FORWARD);
+
+ /* Enable the Rx crypto engine and restart the Rx data path*/
+ IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
+
+ /* Test if crypto was enabled */
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
+ if (reg != IXGBE_SECTXCTRL_STORE_FORWARD)
+ {
+ IXGBE_CRYPTO_LOG_ERR("Error enabling Tx Crypto");
+ return -1;
+ }
+ reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
+ if (reg != 0)
+ {
+ IXGBE_CRYPTO_LOG_ERR("Error enabling Rx Crypto");
+ return -1;
+ }
+
+ crypto_ixgbe_clear_ipsec_tables(cryptodev);
+ return 0;
+}
+
+
+int crypto_ixgbe_add_sa(struct rte_cryptodev *cryptodev, struct ixgbe_crypto_session *sess)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(cryptodev->data->dev_private);
+ struct ixgbe_crypto_private *internals = cryptodev->data->dev_private;
+ char pci_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ uint32_t reg;
+ int sa_index = -1;
+
+ rte_pci_device_name(&(RTE_DEV_TO_PCI(cryptodev->device)->addr), pci_name, sizeof(pci_name));
+
+ if (!(internals->flags & IS_INITIALIZED)) {
+ if (crypto_ixgbe_enable_ipsec(cryptodev) == 0)
+ internals->flags |= IS_INITIALIZED;
+ }
+
+ if (sess->dir == RTE_CRYPTO_INBOUND) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(internals->rx_ip_table[i].ip, sess->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+ /* If no match, find a free entry in the IP table*/
+ if (ip_index < 0) {
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (internals->rx_ip_table[i].ref_count == 0) {
+ ip_index = i;
+ break;
+ }
+ }
+ }
+
+ /* Fail if no match and no free entries*/
+ if (ip_index < 0) {
+ IXGBE_CRYPTO_LOG_ERR("%s no free entry left in the Rx IP table\n", pci_name);
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (internals->rx_sa_table[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ IXGBE_CRYPTO_LOG_ERR("%s no free entry left in the Rx SA table\n", pci_name);
+ return -1;
+ }
+
+ internals->rx_ip_table[ip_index].ip.ipv6[0] = rte_cpu_to_be_32(sess->dst_ip.ipv6[0]);
+ internals->rx_ip_table[ip_index].ip.ipv6[1] = rte_cpu_to_be_32(sess->dst_ip.ipv6[1]);
+ internals->rx_ip_table[ip_index].ip.ipv6[2] = rte_cpu_to_be_32(sess->dst_ip.ipv6[2]);
+ internals->rx_ip_table[ip_index].ip.ipv6[3] = rte_cpu_to_be_32(sess->dst_ip.ipv6[3]);
+ internals->rx_ip_table[ip_index].ref_count++;
+
+ internals->rx_sa_table[sa_index].spi = rte_cpu_to_be_32(sess->spi);
+ internals->rx_sa_table[sa_index].ip_index = ip_index;
+ internals->rx_sa_table[sa_index].key[3] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[0]);
+ internals->rx_sa_table[sa_index].key[2] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[4]);
+ internals->rx_sa_table[sa_index].key[1] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[8]);
+ internals->rx_sa_table[sa_index].key[0] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[12]);
+ internals->rx_sa_table[sa_index].salt = rte_cpu_to_be_32(sess->salt);
+ internals->rx_sa_table[sa_index].mode = IPSRXMOD_VALID;
+ if (sess->enc)
+ internals->rx_sa_table[sa_index].mode |= (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
+ if (sess->dst_ip.type == IPV6_ADDRESS)
+ internals->rx_sa_table[sa_index].mode |= IPSRXMOD_IPV6;
+ internals->rx_sa_table[sa_index].used = 1;
+
+
+ /* write IP table entry*/
+ reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | (ip_index << 3);
+ if (internals->rx_ip_table[ip_index].ip.type == IPV4_ADDRESS) {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), internals->rx_ip_table[ip_index].ip.ipv4);
+ } else {
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), internals->rx_ip_table[ip_index].ip.ipv6[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), internals->rx_ip_table[ip_index].ip.ipv6[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), internals->rx_ip_table[ip_index].ip.ipv6[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), internals->rx_ip_table[ip_index].ip.ipv6[3]);
+ }
+ IXGBE_WAIT_RWRITE;
+
+ /* write SPI table entry*/
+ reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, internals->rx_sa_table[sa_index].spi);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, internals->rx_sa_table[sa_index].ip_index);
+ IXGBE_WAIT_RWRITE;
+
+ /* write Key table entry*/
+ reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), internals->rx_sa_table[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), internals->rx_sa_table[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), internals->rx_sa_table[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), internals->rx_sa_table[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, internals->rx_sa_table[sa_index].salt);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, internals->rx_sa_table[sa_index].mode);
+ IXGBE_WAIT_RWRITE;
+
+ } else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
+ int i;
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (internals->tx_sa_table[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ IXGBE_CRYPTO_LOG_ERR("%s no free entry left in the Tx SA table\n", pci_name);
+ return -1;
+ }
+
+ internals->tx_sa_table[sa_index].spi = rte_cpu_to_be_32(sess->spi);
+ internals->tx_sa_table[sa_index].key[3] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[0]);
+ internals->tx_sa_table[sa_index].key[2] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[4]);
+ internals->tx_sa_table[sa_index].key[1] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[8]);
+ internals->tx_sa_table[sa_index].key[0] = rte_cpu_to_be_32(*(uint32_t*)&sess->key[12]);
+ internals->tx_sa_table[sa_index].salt = rte_cpu_to_be_32(sess->salt);
+
+ reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), internals->tx_sa_table[sa_index].key[0]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), internals->tx_sa_table[sa_index].key[1]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), internals->tx_sa_table[sa_index].key[2]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), internals->tx_sa_table[sa_index].key[3]);
+ IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, internals->tx_sa_table[sa_index].salt);
+ IXGBE_WAIT_TWRITE;
+
+ }
+
+ return sa_index;
+}
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_ixgbe_crypto_map[] = {
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_BX) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AF_DUAL_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AF_SINGLE_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AT2) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_SFP_LOM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_CX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_CX4_DUAL_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_DA_DUAL_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_SR_DUAL_PORT_EM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_XF_LR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KX4_MEZZ) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_COMBO_BACKPLANE) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_KX4_KR_MEZZ) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_CX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_RNDC) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_560FLR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_ECNA_DP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_BACKPLANE_FCOE) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_FCOE) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_EM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_SF2) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_SF_QP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_QSFP_SF_QP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599EN_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_XAUI_LOM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_T3_LOM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_LS) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540T1) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_10G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_1G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550T1) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_KR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_KR_L) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SFP_N) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SGMII) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SGMII_L) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_10G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_QSFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_QSFP_N) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_1G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_1G_T_L) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_KX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_KR) },
+#ifdef RTE_NIC_BYPASS
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_BYPASS) },
+#endif
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+
+static int
+crypto_ixgbe_dev_init(__rte_unused struct rte_cryptodev_driver *crypto_drv,
+ struct rte_cryptodev *cryptodev)
+{
+ struct ixgbe_crypto_private *internals;
+ char pci_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+ PMD_INIT_FUNC_TRACE();
+
+ rte_pci_device_name(&(RTE_DEV_TO_PCI(cryptodev->device)->addr), pci_name, sizeof(pci_name));
+ IXGBE_CRYPTO_LOG_DBG("Found crypto device at %s", pci_name);
+
+ cryptodev->dev_type = RTE_CRYPTODEV_IXGBE_PMD;
+ cryptodev->dev_ops = &ixgbe_crypto_pmd_ops;
+
+ cryptodev->enqueue_burst = ixgbe_crypto_pmd_enqueue_burst;
+ cryptodev->dequeue_burst = ixgbe_crypto_pmd_dequeue_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_HW_ACCELERATED |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER;
+
+ internals = cryptodev->data->dev_private;
+ internals->max_nb_sessions = RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS;
+ internals->max_nb_qpairs = RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS;
+ internals->flags = 0;
+ memset(internals->rx_ip_table, 0, sizeof(internals->rx_ip_table));
+ memset(internals->rx_sa_table, 0, sizeof(internals->rx_sa_table));
+ memset(internals->tx_sa_table, 0, sizeof(internals->tx_sa_table));
+
+
+ /*
+ * For secondary processes, we don't initialise any further as the
+ * primary process has already done this work, so there is nothing
+ * left to do here.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ IXGBE_CRYPTO_LOG_DBG("Device already initialised by primary process");
+ return 0;
+ }
+
+ return 0;
+}
+
+
+static struct rte_cryptodev_driver cryptodev_ixgbe_pmd_drv = {
+ .pci_drv = {
+ .id_table = pci_id_ixgbe_crypto_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = rte_cryptodev_pci_probe,
+ .remove = rte_cryptodev_pci_remove,
+ },
+ .cryptodev_init = crypto_ixgbe_dev_init,
+ .dev_private_size = sizeof(struct ixgbe_crypto_private),
+};
+
+RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_IXGBE_PMD, cryptodev_ixgbe_pmd_drv.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_IXGBE_PMD, pci_id_ixgbe_crypto_map);
+
diff --git a/drivers/crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map b/drivers/crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map
new file mode 100644
index 0000000..dc4d417
--- /dev/null
+++ b/drivers/crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map
@@ -0,0 +1,3 @@
+DPDK_16.04 {
+ local: *;
+};
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ec667d8..bcb1489 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -428,61 +428,61 @@ static void ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev);
* The set of PCI devices this driver supports
*/
static const struct rte_pci_id pci_id_ixgbe_map[] = {
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_BX) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AF_DUAL_PORT) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AF_SINGLE_PORT) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AT) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AT2) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_SFP_LOM) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_CX4) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_CX4_DUAL_PORT) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_DA_DUAL_PORT) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_SR_DUAL_PORT_EM) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_XF_LR) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KX4) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KX4_MEZZ) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KR) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_COMBO_BACKPLANE) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_KX4_KR_MEZZ) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_CX4) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_SFP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_RNDC) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_560FLR) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_ECNA_DP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_BACKPLANE_FCOE) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_FCOE) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_EM) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_SF2) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_SF_QP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_QSFP_SF_QP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599EN_SFP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_XAUI_LOM) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_T3_LOM) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_LS) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540T) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540T1) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_SFP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_10G_T) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_1G_T) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550T) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550T1) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_KR) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_KR_L) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SFP_N) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SGMII) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SGMII_L) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_10G_T) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_QSFP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_QSFP_N) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SFP) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_1G_T) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_1G_T_L) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_KX4) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_KR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_BX) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AF_DUAL_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AF_SINGLE_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598AT2) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_SFP_LOM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_CX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_CX4_DUAL_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_DA_DUAL_PORT) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598_SR_DUAL_PORT_EM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82598EB_XF_LR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KX4_MEZZ) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_KR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_COMBO_BACKPLANE) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_KX4_KR_MEZZ) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_CX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_RNDC) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_560FLR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_SUBDEV_ID_82599_ECNA_DP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_BACKPLANE_FCOE) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_FCOE) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_EM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_SF2) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_SFP_SF_QP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_QSFP_SF_QP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599EN_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_XAUI_LOM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_T3_LOM) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_LS) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540T1) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_10G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_1G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550T1) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_KR) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_KR_L) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SFP_N) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SGMII) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SGMII_L) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_10G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_QSFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_QSFP_N) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_SFP) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_1G_T) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_1G_T_L) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_KX4) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_KR) },
#ifdef RTE_NIC_BYPASS
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_BYPASS) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_BYPASS) },
#endif
{ .vendor_id = 0, /* sentinel */ },
};
@@ -491,16 +491,16 @@ static const struct rte_pci_id pci_id_ixgbe_map[] = {
* The set of PCI devices this driver supports (for 82599 VF)
*/
static const struct rte_pci_id pci_id_ixgbevf_map[] = {
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_VF) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_VF_HV) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540_VF) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540_VF_HV) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550_VF_HV) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550_VF) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_VF) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_VF_HV) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_VF) },
- { RTE_PCI_DEVICE(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_VF_HV) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_VF) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_82599_VF_HV) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540_VF) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X540_VF_HV) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550_VF_HV) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550_VF) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_VF) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_A_VF_HV) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_VF) },
+ { RTE_PCI_DEVICE_SH(IXGBE_INTEL_VENDOR_ID, IXGBE_DEV_ID_X550EM_X_VF_HV) },
{ .vendor_id = 0, /* sentinel */ },
};
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 1e07895..f2a9066 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -395,7 +395,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
- uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
+ uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
+ struct rte_mbuf *mb)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx = 0;
@@ -480,6 +481,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
+ if (mb->ol_flags & PKT_TX_IPSEC_INLINE_CRYPTO)
+ {
+ seqnum_seed |= (IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mb->inline_ipsec.sa_idx);
+ type_tucmd_mlhl |= mb->inline_ipsec.enc?
+ (IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP | IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
+ type_tucmd_mlhl |= (mb->inline_ipsec.pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
+ }
+
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
tx_offload_mask.data[0] & tx_offload.data[0];
@@ -855,7 +864,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- tx_offload);
+ tx_offload, tx_pkt);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -872,7 +881,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
- olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+ olinfo_status |= ((pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
+ (((ol_flags & PKT_TX_IPSEC_INLINE_CRYPTO) != 0) * IXGBE_ADVTXD_POPTS_IPSEC));
m_seg = tx_pkt;
do {
@@ -1450,6 +1460,12 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
}
+ if (rx_status & IXGBE_RXD_STAT_SECP) {
+ pkt_flags |= PKT_RX_IPSEC_INLINE_CRYPTO;
+ if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
+ pkt_flags |= PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED;
+ }
+
return pkt_flags;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index a7bc199..0d88252 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -126,6 +126,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
{
__m128i ptype0, ptype1, vtag0, vtag1, csum;
__m128i rearm0, rearm1, rearm2, rearm3;
+ __m128i sterr0, sterr1, sterr2, sterr3, tmp1, tmp2;
/* mask everything except rss type */
const __m128i rsstype_msk = _mm_set_epi16(
@@ -172,6 +173,16 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
+ const __m128i ipsec_sterr_msk = _mm_set_epi32(
+ 0, IXGBE_RXD_STAT_SECP | IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG, 0, 0);
+ const __m128i ipsec_proc_msk = _mm_set_epi32(
+ 0, IXGBE_RXD_STAT_SECP, 0, 0);
+ const __m128i ipsec_err_flag = _mm_set_epi32(
+ 0, PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED | PKT_RX_IPSEC_INLINE_CRYPTO, 0, 0);
+ const __m128i ipsec_proc_flag = _mm_set_epi32(
+ 0, PKT_RX_IPSEC_INLINE_CRYPTO, 0, 0);
+
+
ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
@@ -234,6 +245,29 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
#endif /* RTE_MACHINE_CPUFLAG_SSE4_2 */
+
+ /* inline IPsec: extract the security offload flags from the descriptors */
+ sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
+ sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
+ sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
+ sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
+ tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
+ sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag), _mm_and_si128(tmp2, ipsec_proc_flag));
+ tmp1 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
+ sterr1 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag), _mm_and_si128(tmp2, ipsec_proc_flag));
+ tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
+ sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag), _mm_and_si128(tmp2, ipsec_proc_flag));
+ tmp1 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
+ tmp2 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
+ sterr3 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag), _mm_and_si128(tmp2, ipsec_proc_flag));
+ rearm0 = _mm_or_si128(rearm0, sterr0);
+ rearm1 = _mm_or_si128(rearm1, sterr1);
+ rearm2 = _mm_or_si128(rearm2, sterr2);
+ rearm3 = _mm_or_si128(rearm3, sterr3);
+
_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 88aeb87..28fb92b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -72,6 +72,8 @@ extern "C" {
/**< Scheduler Crypto PMD device name */
#define CRYPTODEV_NAME_DPAA2_SEC_PMD cryptodev_dpaa2_sec_pmd
/**< NXP DPAA2 - SEC PMD device name */
+#define CRYPTODEV_NAME_IXGBE_PMD crypto_ixgbe_ipsec
+/**< IXGBE inline ipsec crypto PMD device name */
/** Crypto device type */
enum rte_cryptodev_type {
@@ -82,10 +84,11 @@ enum rte_cryptodev_type {
RTE_CRYPTODEV_SNOW3G_PMD, /**< SNOW 3G PMD */
RTE_CRYPTODEV_KASUMI_PMD, /**< KASUMI PMD */
RTE_CRYPTODEV_ZUC_PMD, /**< ZUC PMD */
- RTE_CRYPTODEV_OPENSSL_PMD, /**< OpenSSL PMD */
+ RTE_CRYPTODEV_OPENSSL_PMD, /**< OpenSSL PMD */
RTE_CRYPTODEV_ARMV8_PMD, /**< ARMv8 crypto PMD */
RTE_CRYPTODEV_SCHEDULER_PMD, /**< Crypto Scheduler PMD */
RTE_CRYPTODEV_DPAA2_SEC_PMD, /**< NXP DPAA2 - SEC PMD */
+ RTE_CRYPTODEV_IXGBE_PMD, /**< IXGBE Inline IPSec PMD */
};
extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index bcaf1b3..260599e 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -150,6 +150,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -L$(AESNI_MULTI_BUFFER_LIB_PATH)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += -lrte_pmd_aesni_gcm -lisal_crypto
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL) += -lrte_pmd_openssl -lcrypto
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_IXGBE_CRYPTO) += -lrte_pmd_ixgbe_crypto
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += -lrte_pmd_qat -lcrypto
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lrte_pmd_snow3g
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
--
2.7.4
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [RFC][PATCH 5/5] examples: updated IPSec sample app to support inline IPSec
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
` (3 preceding siblings ...)
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 4/5] cryptodev: added new crypto PMD supporting inline IPSec for IXGBE Radu Nicolau
@ 2017-05-09 14:57 ` Radu Nicolau
2017-05-10 16:07 ` [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Boris Pismenny
2017-05-16 21:46 ` Thomas Monjalon
6 siblings, 0 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-09 14:57 UTC (permalink / raw)
To: dev; +Cc: Radu Nicolau
Added new SA types: ipv4-inline and ipv6-inline.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
examples/ipsec-secgw/esp.c | 7 +-
examples/ipsec-secgw/ipsec.h | 2 +
examples/ipsec-secgw/sa.c | 165 ++++++++++++++++++++++++++++---------------
3 files changed, 117 insertions(+), 57 deletions(-)
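Usage note: an inline SA is declared in the SA configuration in the same
way as a tunnel SA, only with the new mode token, e.g. changing a rule of
the form

    ... mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5

to

    ... mode ipv4-inline src 172.16.1.5 dst 172.16.2.5

with the rest of the rule (SPI, keys, algorithms) left unchanged; the
addresses above are only examples. For inline SAs the new IPsec transform
additionally carries the SPI, salt and IP addresses, as can be seen in the
sa_add_rules() changes below.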
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index e77afa0..f1dfac4 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -253,11 +253,12 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
pad_len = pad_payload_len + ip_hdr_len - rte_pktmbuf_pkt_len(m);
RTE_ASSERT(sa->flags == IP4_TUNNEL || sa->flags == IP6_TUNNEL ||
+ sa->flags == IP4_INLINE || sa->flags == IP6_INLINE ||
sa->flags == TRANSPORT);
- if (likely(sa->flags == IP4_TUNNEL))
+ if (likely(sa->flags == IP4_TUNNEL || sa->flags == IP4_INLINE))
ip_hdr_len = sizeof(struct ip);
- else if (sa->flags == IP6_TUNNEL)
+ else if (sa->flags == IP6_TUNNEL || sa->flags == IP6_INLINE)
ip_hdr_len = sizeof(struct ip6_hdr);
else if (sa->flags != TRANSPORT) {
RTE_LOG(ERR, IPSEC_ESP, "Unsupported SA flags: 0x%x\n",
@@ -281,11 +282,13 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
switch (sa->flags) {
case IP4_TUNNEL:
+ case IP4_INLINE:
ip4 = ip4ip_outbound(m, sizeof(struct esp_hdr) + sa->iv_len,
&sa->src, &sa->dst);
esp = (struct esp_hdr *)(ip4 + 1);
break;
case IP6_TUNNEL:
+ case IP6_INLINE:
ip6 = ip6ip_outbound(m, sizeof(struct esp_hdr) + sa->iv_len,
&sa->src, &sa->dst);
esp = (struct esp_hdr *)(ip6 + 1);
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index fe42661..502c182 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -107,6 +107,8 @@ struct ipsec_sa {
#define IP4_TUNNEL (1 << 0)
#define IP6_TUNNEL (1 << 1)
#define TRANSPORT (1 << 2)
+#define IP4_INLINE (1 << 3)
+#define IP6_INLINE (1 << 4)
struct ip_addr src;
struct ip_addr dst;
uint8_t cipher_key[MAX_KEY_SIZE];
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 39624c4..b58bca7 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -256,6 +256,10 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
rule->flags = IP6_TUNNEL;
else if (strcmp(tokens[ti], "transport") == 0)
rule->flags = TRANSPORT;
+ else if (strcmp(tokens[ti], "ipv4-inline") == 0)
+ rule->flags = IP4_INLINE;
+ else if (strcmp(tokens[ti], "ipv6-inline") == 0)
+ rule->flags = IP6_INLINE;
else {
APP_CHECK(0, status, "unrecognized "
"input \"%s\"", tokens[ti]);
@@ -395,7 +399,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
if (status->status < 0)
return;
- if (rule->flags == IP4_TUNNEL) {
+ if (rule->flags == IP4_TUNNEL || rule->flags == IP4_INLINE) {
struct in_addr ip;
APP_CHECK(parse_ipv4_addr(tokens[ti],
@@ -407,7 +411,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
return;
rule->src.ip.ip4 = rte_bswap32(
(uint32_t)ip.s_addr);
- } else if (rule->flags == IP6_TUNNEL) {
+ } else if (rule->flags == IP6_TUNNEL || rule->flags == IP6_INLINE) {
struct in6_addr ip;
APP_CHECK(parse_ipv6_addr(tokens[ti], &ip,
@@ -438,7 +442,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
if (status->status < 0)
return;
- if (rule->flags == IP4_TUNNEL) {
+ if (rule->flags == IP4_TUNNEL || rule->flags == IP4_INLINE) {
struct in_addr ip;
APP_CHECK(parse_ipv4_addr(tokens[ti],
@@ -450,7 +454,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
return;
rule->dst.ip.ip4 = rte_bswap32(
(uint32_t)ip.s_addr);
- } else if (rule->flags == IP6_TUNNEL) {
+ } else if (rule->flags == IP6_TUNNEL || rule->flags == IP6_INLINE) {
struct in6_addr ip;
APP_CHECK(parse_ipv6_addr(tokens[ti], &ip,
@@ -518,14 +522,16 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
switch (sa->flags) {
case IP4_TUNNEL:
- printf("IP4Tunnel ");
+ case IP4_INLINE:
+ printf(sa->flags == IP4_TUNNEL? "IP4Tunnel " : "IP4Inline ");
uint32_t_to_char(sa->src.ip.ip4, &a, &b, &c, &d);
printf("%hhu.%hhu.%hhu.%hhu ", d, c, b, a);
uint32_t_to_char(sa->dst.ip.ip4, &a, &b, &c, &d);
printf("%hhu.%hhu.%hhu.%hhu", d, c, b, a);
break;
case IP6_TUNNEL:
- printf("IP6Tunnel ");
+ case IP6_INLINE:
+ printf(sa->flags == IP6_TUNNEL? "IP6Tunnel " : "IP6Inline ");
for (i = 0; i < 16; i++) {
if (i % 2 && i != 15)
printf("%.2x:", sa->src.ip.ip6.ip6_b[i]);
@@ -603,60 +609,107 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
switch (sa->flags) {
case IP4_TUNNEL:
+ case IP4_INLINE:
sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
sa->dst.ip.ip4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
}
- if (inbound) {
- sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
- sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
- sa_ctx->xf[idx].b.cipher.key.data = sa->cipher_key;
- sa_ctx->xf[idx].b.cipher.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].b.cipher.op =
- RTE_CRYPTO_CIPHER_OP_DECRYPT;
- sa_ctx->xf[idx].b.next = NULL;
-
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AUTH;
- sa_ctx->xf[idx].a.auth.algo = sa->auth_algo;
- sa_ctx->xf[idx].a.auth.add_auth_data_length =
- sa->aad_len;
- sa_ctx->xf[idx].a.auth.key.data = sa->auth_key;
- sa_ctx->xf[idx].a.auth.key.length =
- sa->auth_key_len;
- sa_ctx->xf[idx].a.auth.digest_length =
- sa->digest_len;
- sa_ctx->xf[idx].a.auth.op =
- RTE_CRYPTO_AUTH_OP_VERIFY;
-
- } else { /* outbound */
- sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
- sa_ctx->xf[idx].a.cipher.algo = sa->cipher_algo;
- sa_ctx->xf[idx].a.cipher.key.data = sa->cipher_key;
- sa_ctx->xf[idx].a.cipher.key.length =
- sa->cipher_key_len;
- sa_ctx->xf[idx].a.cipher.op =
- RTE_CRYPTO_CIPHER_OP_ENCRYPT;
- sa_ctx->xf[idx].a.next = NULL;
-
- sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_AUTH;
- sa_ctx->xf[idx].b.auth.algo = sa->auth_algo;
- sa_ctx->xf[idx].b.auth.add_auth_data_length =
- sa->aad_len;
- sa_ctx->xf[idx].b.auth.key.data = sa->auth_key;
- sa_ctx->xf[idx].b.auth.key.length =
- sa->auth_key_len;
- sa_ctx->xf[idx].b.auth.digest_length =
- sa->digest_len;
- sa_ctx->xf[idx].b.auth.op =
- RTE_CRYPTO_AUTH_OP_GENERATE;
+ if (sa->flags == IP4_INLINE || sa->flags == IP6_INLINE) {
+
+ if (inbound) {
+ sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
+ sa_ctx->xf[idx].b.cipher.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].b.cipher.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].b.cipher.op =
+ RTE_CRYPTO_CIPHER_OP_DECRYPT;
+ sa_ctx->xf[idx].b.next = NULL;
+
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_IPSEC;
+ sa_ctx->xf[idx].a.ipsec.dir = RTE_CRYPTO_INBOUND;
+ sa_ctx->xf[idx].a.ipsec.spi = sa->spi;
+ sa_ctx->xf[idx].a.ipsec.salt = sa->salt;
+ sa_ctx->xf[idx].a.ipsec.src_ip.ipv4 = rte_cpu_to_be_32(sa->src.ip.ip4);
+ sa_ctx->xf[idx].a.ipsec.dst_ip.ipv4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
+
+ } else { /* outbound */
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sa_ctx->xf[idx].a.cipher.algo = sa->cipher_algo;
+ sa_ctx->xf[idx].a.cipher.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].a.cipher.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].a.cipher.op =
+ RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.next = NULL;
+
+ sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_IPSEC;
+ sa_ctx->xf[idx].b.ipsec.dir = RTE_CRYPTO_OUTBOUND;
+ sa_ctx->xf[idx].b.ipsec.spi = sa->spi;
+ sa_ctx->xf[idx].b.ipsec.salt = sa->salt;
+ sa_ctx->xf[idx].b.ipsec.src_ip.ipv4 = rte_cpu_to_be_32(sa->src.ip.ip4);
+ sa_ctx->xf[idx].b.ipsec.dst_ip.ipv4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
+ }
+
+ sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
+ sa_ctx->xf[idx].b.next = NULL;
+ sa->xforms = &sa_ctx->xf[idx].a;
+
+ print_one_sa_rule(sa, inbound);
+ }
+ else {
+
+ if (inbound) {
+ sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
+ sa_ctx->xf[idx].b.cipher.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].b.cipher.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].b.cipher.op =
+ RTE_CRYPTO_CIPHER_OP_DECRYPT;
+ sa_ctx->xf[idx].b.next = NULL;
+
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sa_ctx->xf[idx].a.auth.algo = sa->auth_algo;
+ sa_ctx->xf[idx].a.auth.add_auth_data_length =
+ sa->aad_len;
+ sa_ctx->xf[idx].a.auth.key.data = sa->auth_key;
+ sa_ctx->xf[idx].a.auth.key.length =
+ sa->auth_key_len;
+ sa_ctx->xf[idx].a.auth.digest_length =
+ sa->digest_len;
+ sa_ctx->xf[idx].a.auth.op =
+ RTE_CRYPTO_AUTH_OP_VERIFY;
+
+ } else { /* outbound */
+ sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sa_ctx->xf[idx].a.cipher.algo = sa->cipher_algo;
+ sa_ctx->xf[idx].a.cipher.key.data = sa->cipher_key;
+ sa_ctx->xf[idx].a.cipher.key.length =
+ sa->cipher_key_len;
+ sa_ctx->xf[idx].a.cipher.op =
+ RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.next = NULL;
+
+ sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sa_ctx->xf[idx].b.auth.algo = sa->auth_algo;
+ sa_ctx->xf[idx].b.auth.add_auth_data_length =
+ sa->aad_len;
+ sa_ctx->xf[idx].b.auth.key.data = sa->auth_key;
+ sa_ctx->xf[idx].b.auth.key.length =
+ sa->auth_key_len;
+ sa_ctx->xf[idx].b.auth.digest_length =
+ sa->digest_len;
+ sa_ctx->xf[idx].b.auth.op =
+ RTE_CRYPTO_AUTH_OP_GENERATE;
+ }
+
+ sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
+ sa_ctx->xf[idx].b.next = NULL;
+ sa->xforms = &sa_ctx->xf[idx].a;
+
+ print_one_sa_rule(sa, inbound);
}
-
- sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
- sa_ctx->xf[idx].b.next = NULL;
- sa->xforms = &sa_ctx->xf[idx].a;
-
- print_one_sa_rule(sa, inbound);
}
return 0;
@@ -755,6 +808,7 @@ single_inbound_lookup(struct ipsec_sa *sadb, struct rte_mbuf *pkt,
switch (sa->flags) {
case IP4_TUNNEL:
+ case IP4_INLINE:
src4_addr = RTE_PTR_ADD(ip, offsetof(struct ip, ip_src));
if ((ip->ip_v == IPVERSION) &&
(sa->src.ip.ip4 == *src4_addr) &&
@@ -762,6 +816,7 @@ single_inbound_lookup(struct ipsec_sa *sadb, struct rte_mbuf *pkt,
*sa_ret = sa;
break;
case IP6_TUNNEL:
+ case IP6_INLINE:
src6_addr = RTE_PTR_ADD(ip, offsetof(struct ip6_hdr, ip6_src));
if ((ip->ip_v == IP6_VERSION) &&
!memcmp(&sa->src.ip.ip6.ip6, src6_addr, 16) &&
--
2.7.4
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances Radu Nicolau
@ 2017-05-10 9:09 ` Thomas Monjalon
2017-05-10 10:11 ` Radu Nicolau
0 siblings, 1 reply; 21+ messages in thread
From: Thomas Monjalon @ 2017-05-10 9:09 UTC (permalink / raw)
To: Radu Nicolau; +Cc: dev
Hi,
09/05/2017 16:57, Radu Nicolau:
> Updated PCI initialization code to allow devices to be shared across multiple PMDs.
>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
I was waiting for the day when we would have a device shared
by two different interfaces.
Note that some Mellanox and Chelsio devices already instantiate
two ethdev ports per PCI device.
Please explain your idea behind this "shared" flag.
What is your exact need?
Do you think it is the best solution?
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 9:09 ` Thomas Monjalon
@ 2017-05-10 10:11 ` Radu Nicolau
2017-05-10 10:28 ` Thomas Monjalon
0 siblings, 1 reply; 21+ messages in thread
From: Radu Nicolau @ 2017-05-10 10:11 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi
On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
> Hi,
>
> 09/05/2017 16:57, Radu Nicolau:
>> Updated PCI initialization code to allow devices to be shared across multiple PMDs.
>>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> I was waiting the day when we have a device shared
> by two different interfaces.
> Note that some Mellanox and Chelsio devices already instantiate
> two ethdev ports per PCI device.
>
> Please explain your idea behind this "shared" flag.
> What is your exact need?
Currently, for each PCI device a look-up into the list of PMDs is
performed, and when a match is found the system moves on to the next
device. This flag allows a PMD to inform the system that there may be
further matches, i.e. more PMDs that can be used for this particular
device.
There is a difference when comparing to the devices you mentioned above:
in this case the PMDs are of totally different types, one network PMD
and one cryptodev PMD for each IXGBE network card.
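To illustrate, a minimal self-contained sketch of the probe loop change;
the type, flag and function names here are purely illustrative, not the
ones used in the patch:

#include <sys/queue.h>
#include <errno.h>

#define DRV_FLAG_SHARED 0x1 /* illustrative name, not the RFC's macro */

/* Minimal stand-ins for the EAL types, only to show the idea. */
struct pci_dev;
struct pci_drv {
        TAILQ_ENTRY(pci_drv) next;
        unsigned int drv_flags;
        int (*match)(struct pci_drv *, struct pci_dev *);
        int (*probe)(struct pci_drv *, struct pci_dev *);
};
TAILQ_HEAD(drv_list, pci_drv);

static int
probe_device(struct drv_list *drivers, struct pci_dev *dev)
{
        struct pci_drv *drv;
        int matched = 0;

        TAILQ_FOREACH(drv, drivers, next) {
                if (!drv->match(drv, dev))
                        continue;
                if (drv->probe(drv, dev) == 0)
                        matched++;
                /*
                 * Today the scan stops at the first matching PMD; with
                 * the shared flag set we keep going, so that e.g. a
                 * crypto PMD can also bind to the same device.
                 */
                if (!(drv->drv_flags & DRV_FLAG_SHARED))
                        break;
        }
        return matched ? 0 : -ENODEV;
}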
> Do you think it is the best solution?
We evaluated different approaches and this is what we settled on. It
might not be the best; if there are any suggestions of other ways to
achieve this I would be thankful.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 10:11 ` Radu Nicolau
@ 2017-05-10 10:28 ` Thomas Monjalon
2017-05-10 10:47 ` Radu Nicolau
2017-05-10 10:52 ` Declan Doherty
0 siblings, 2 replies; 21+ messages in thread
From: Thomas Monjalon @ 2017-05-10 10:28 UTC (permalink / raw)
To: Radu Nicolau; +Cc: dev
10/05/2017 12:11, Radu Nicolau:
> Hi
>
>
> On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
> > Hi,
> >
> > 09/05/2017 16:57, Radu Nicolau:
> >> Updated PCI initialization code to allow devices to be shared across multiple PMDs.
> >>
> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > I was waiting the day when we have a device shared
> > by two different interfaces.
> > Note that some Mellanox and Chelsio devices already instantiate
> > two ethdev ports per PCI device.
> >
> > Please explain your idea behind this "shared" flag.
> > What is your exact need?
>
> Currently for each pci device a look-up into a list of PMDs is
> performed, and when a match is found the system moves to the next
> device. Having this flag will allow a PMD to inform the system that
> there may be more matches, more PMDs that can be used for this
> particular device.
> There is a difference when comparing to the devices you mentioned above,
> in this case the PMDs are totally different types, one network and one
> cryptodev PMD for each IXGBE network card.
Yes, I know this is a gap in DPDK.
Linux introduced the MultiFunction Device framework in 2005:
http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
> > Do you think it is the best solution?
>
> We evaluated different approaches and this is what we settled on. It
> might not be the best, if there are any suggestions of other ways to
> achieve this I would be thankful.
Please could you explain the other approaches you thought
with pros and cons?
Thanks
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 10:28 ` Thomas Monjalon
@ 2017-05-10 10:47 ` Radu Nicolau
2017-05-10 10:52 ` Declan Doherty
1 sibling, 0 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-10 10:47 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On 5/10/2017 11:28 AM, Thomas Monjalon wrote:
> 10/05/2017 12:11, Radu Nicolau:
>> Hi
>>
>>
>> On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
>>> Hi,
>>>
>>> 09/05/2017 16:57, Radu Nicolau:
>>>> Updated PCI initialization code to allow devices to be shared across multiple PMDs.
>>>>
>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>> I was waiting the day when we have a device shared
>>> by two different interfaces.
>>> Note that some Mellanox and Chelsio devices already instantiate
>>> two ethdev ports per PCI device.
>>>
>>> Please explain your idea behind this "shared" flag.
>>> What is your exact need?
>> Currently for each pci device a look-up into a list of PMDs is
>> performed, and when a match is found the system moves to the next
>> device. Having this flag will allow a PMD to inform the system that
>> there may be more matches, more PMDs that can be used for this
>> particular device.
>> There is a difference when comparing to the devices you mentioned above,
>> in this case the PMDs are totally different types, one network and one
>> cryptodev PMD for each IXGBE network card.
> Yes I know it is a lack in DPDK.
> Linux introduced MultiFunction Device in 2005:
> http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
>
>>> Do you think it is the best solution?
>> We evaluated different approaches and this is what we settled on. It
>> might not be the best, if there are any suggestions of other ways to
>> achieve this I would be thankful.
Please could you explain the other approaches you thought of,
with their pros and cons?
We have considered a vdev crypto PMD approach that would not have
required changes to the EAL section, but it would have required some sort
of side communication with the IXGBE PMD; another option was some sort of
on-demand initialized cryptodev. Compared with these approaches the
current one is cleaner and makes more sense: initialize a net PMD and a
crypto PMD for a device that is both a NIC and a crypto device.
>
> Thanks
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 10:28 ` Thomas Monjalon
2017-05-10 10:47 ` Radu Nicolau
@ 2017-05-10 10:52 ` Declan Doherty
2017-05-10 11:08 ` Jerin Jacob
1 sibling, 1 reply; 21+ messages in thread
From: Declan Doherty @ 2017-05-10 10:52 UTC (permalink / raw)
To: Thomas Monjalon, Radu Nicolau; +Cc: dev
Hey Thomas, I've been working on this with Radu, so see my take below
On 10/05/2017 11:28 AM, Thomas Monjalon wrote:
> 10/05/2017 12:11, Radu Nicolau:
>> Hi
>>
>>
>> On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
>>> Hi,
>>>
>>> 09/05/2017 16:57, Radu Nicolau:
>>>> Updated PCI initialization code to allow devices to be shared across multiple PMDs.
>>>>
>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>> I was waiting the day when we have a device shared
>>> by two different interfaces.
>>> Note that some Mellanox and Chelsio devices already instantiate
>>> two ethdev ports per PCI device.
>>>
>>> Please explain your idea behind this "shared" flag.
>>> What is your exact need?
>>
>> Currently for each pci device a look-up into a list of PMDs is
>> performed, and when a match is found the system moves to the next
>> device. Having this flag will allow a PMD to inform the system that
>> there may be more matches, more PMDs that can be used for this
>> particular device.
>> There is a difference when comparing to the devices you mentioned above,
>> in this case the PMDs are totally different types, one network and one
>> cryptodev PMD for each IXGBE network card.
>
> Yes I know it is a lack in DPDK.
> Linux introduced MultiFunction Device in 2005:
> http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
>
So at the most basic level the intention is to allow more than one
device of different types, in our case a net PMD and a crypto PMD, to be
instantiated on a single PCI BAR, in essence to share the BAR. I'm not
familiar with the approaches taken in the Mellanox and Chelsio devices,
but I assume they are handled within the driver probe/create functions,
independently of the EAL infrastructure?
For the initial prototyping of this RFC we only implemented the
multi-device creation, but I envisage that there will be a requirement
for sharing state between drivers, or at a minimum implementing locking
around shared resources, registers etc. I would like to see this done
in a generic fashion that can be leveraged by any driver, rather than
have each driver solve this independently.
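Something along these lines is what I have in mind; purely illustrative,
with invented names, assuming only the existing rte_spinlock API:

#include <stdint.h>
#include <rte_spinlock.h>

/*
 * One possible shape for a per-device shared context that both the net
 * and crypto PMDs attach to, so that access to common registers and
 * capabilities is serialised in one place instead of being re-implemented
 * in each driver.
 */
struct shared_dev_ctx {
        rte_spinlock_t lock;   /* protects the shared registers */
        uint32_t capabilities; /* e.g. which offloads are currently owned */
        void *hw;              /* common HW handle (mapped PCI BAR) */
};

static inline void
shared_ctx_lock(struct shared_dev_ctx *ctx)
{
        rte_spinlock_lock(&ctx->lock);
}

static inline void
shared_ctx_unlock(struct shared_dev_ctx *ctx)
{
        rte_spinlock_unlock(&ctx->lock);
}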
>>> Do you think it is the best solution?
>>
>> We evaluated different approaches and this is what we settled on. It
>> might not be the best, if there are any suggestions of other ways to
>> achieve this I would be thankful.
I think this approach was sufficient to enable the RFC and kick off the
discussion, but it is not a fully featured solution, and we wanted to get
community feedback before progressing too far along with one.
>
> Please could you explain the other approaches you thought
> with pros and cons?
>
> Thanks
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 10:52 ` Declan Doherty
@ 2017-05-10 11:08 ` Jerin Jacob
2017-05-10 11:31 ` Declan Doherty
2017-05-10 11:37 ` Thomas Monjalon
0 siblings, 2 replies; 21+ messages in thread
From: Jerin Jacob @ 2017-05-10 11:08 UTC (permalink / raw)
To: Declan Doherty; +Cc: Thomas Monjalon, Radu Nicolau, dev
-----Original Message-----
> Date: Wed, 10 May 2017 11:52:45 +0100
> From: Declan Doherty <declan.doherty@intel.com>
> To: Thomas Monjalon <thomas@monjalon.net>, Radu Nicolau
> <radu.nicolau@intel.com>
> CC: dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hey Thomas, I've been working on this with Radu, so see my take below
>
> On 10/05/2017 11:28 AM, Thomas Monjalon wrote:
> > 10/05/2017 12:11, Radu Nicolau:
> > > Hi
> > >
> > >
> > > On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
> > > > Hi,
> > > >
> > > > 09/05/2017 16:57, Radu Nicolau:
> > > > > Updated PCI initialization code to allow devices to be shared across multiple PMDs.
> > > > >
> > > > > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > > > I was waiting the day when we have a device shared
> > > > by two different interfaces.
> > > > Note that some Mellanox and Chelsio devices already instantiate
> > > > two ethdev ports per PCI device.
> > > >
> > > > Please explain your idea behind this "shared" flag.
> > > > What is your exact need?
> > >
> > > Currently for each pci device a look-up into a list of PMDs is
> > > performed, and when a match is found the system moves to the next
> > > device. Having this flag will allow a PMD to inform the system that
> > > there may be more matches, more PMDs that can be used for this
> > > particular device.
> > > There is a difference when comparing to the devices you mentioned above,
> > > in this case the PMDs are totally different types, one network and one
> > > cryptodev PMD for each IXGBE network card.
> >
> > Yes I know it is a lack in DPDK.
> > Linux introduced MultiFunction Device in 2005:
> > http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
> >
>
> So at the most basic level the intention is to allow more than one device of
> different types, in our case a net PMD and a crypto PMD, to be instantiated
> on a single PCI bar, in essence to share the bar. I'm not familiar with the
> approaches taken in the Mellanox and Chelsio devices but I assume they are
> handled with the driver probe/create functions independently from the EAL
> infrastructure?
>
> For the initial proto-typing of this RFC we only implemented the
> multi-device creation but I envisage that there will be a requirement for
> sharing state between drivers, or at a minimum implementing locking around
> shared resources, registers etc. And I would like to see this done in a
> generic fashion that can me leverage by any driver and not have each driver
> having to solve this independently.
Cavium's next-generation PCI-based NW devices have a similar scheme, where
we need to share the same BAR with multiple DPDK subsystems (ethdev,
eventdev etc.), unlike the current generation (OcteonTX).
I think another possible way to handle this in a generic way is to
register a new rte_bus for the shared PCI access which sits on top of the
PCIe bus. With the new bus's scan and probe scheme, it can probe the two
devices.
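Assuming the current rte_bus API (scan/probe callbacks registered with
RTE_REGISTER_BUS), the rough shape would be something like the sketch
below; the callback bodies are only placeholders:

#include <rte_bus.h>

static int
shared_pci_scan(void)
{
        /*
         * Walk the already-scanned PCI devices and pick out the ones
         * that expose more than one personality (e.g. net + crypto).
         */
        return 0;
}

static int
shared_pci_probe(void)
{
        /*
         * For each matched device, invoke both the net PMD and the
         * crypto PMD initialisation.
         */
        return 0;
}

static struct rte_bus shared_pci_bus = {
        .scan = shared_pci_scan,
        .probe = shared_pci_probe,
};

RTE_REGISTER_BUS(shared_pci, shared_pci_bus);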
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 11:08 ` Jerin Jacob
@ 2017-05-10 11:31 ` Declan Doherty
2017-05-10 12:18 ` Jerin Jacob
2017-05-10 11:37 ` Thomas Monjalon
1 sibling, 1 reply; 21+ messages in thread
From: Declan Doherty @ 2017-05-10 11:31 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Thomas Monjalon, Radu Nicolau, dev
On 10/05/2017 12:08 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Wed, 10 May 2017 11:52:45 +0100
>> From: Declan Doherty <declan.doherty@intel.com>
>> To: Thomas Monjalon <thomas@monjalon.net>, Radu Nicolau
>> <radu.nicolau@intel.com>
>> CC: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
>> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101
>> Thunderbird/45.8.0
>>
>> Hey Thomas, I've been working on this with Radu, so see my take below
>>
>> On 10/05/2017 11:28 AM, Thomas Monjalon wrote:
>>> 10/05/2017 12:11, Radu Nicolau:
>>>> Hi
>>>>
>>>>
>>>> On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
>>>>> Hi,
>>>>>
>>>>> 09/05/2017 16:57, Radu Nicolau:
>>>>>> Updated PCI initialization code to allow devices to be shared across multiple PMDs.
>>>>>>
>>>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>>>> I was waiting the day when we have a device shared
>>>>> by two different interfaces.
>>>>> Note that some Mellanox and Chelsio devices already instantiate
>>>>> two ethdev ports per PCI device.
>>>>>
>>>>> Please explain your idea behind this "shared" flag.
>>>>> What is your exact need?
>>>>
>>>> Currently for each pci device a look-up into a list of PMDs is
>>>> performed, and when a match is found the system moves to the next
>>>> device. Having this flag will allow a PMD to inform the system that
>>>> there may be more matches, more PMDs that can be used for this
>>>> particular device.
>>>> There is a difference when comparing to the devices you mentioned above,
>>>> in this case the PMDs are totally different types, one network and one
>>>> cryptodev PMD for each IXGBE network card.
>>>
>>> Yes I know it is a lack in DPDK.
>>> Linux introduced MultiFunction Device in 2005:
>>> http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
>>>
>>
>> So at the most basic level the intention is to allow more than one device of
>> different types, in our case a net PMD and a crypto PMD, to be instantiated
>> on a single PCI bar, in essence to share the bar. I'm not familiar with the
>> approaches taken in the Mellanox and Chelsio devices but I assume they are
>> handled with the driver probe/create functions independently from the EAL
>> infrastructure?
>>
>> For the initial proto-typing of this RFC we only implemented the
>> multi-device creation but I envisage that there will be a requirement for
>> sharing state between drivers, or at a minimum implementing locking around
>> shared resources, registers etc. And I would like to see this done in a
>> generic fashion that can me leverage by any driver and not have each driver
>> having to solve this independently.
>
> Cavium's next generation PCI based NW devices has similar scheme where we
> need to share the same BAR with multiple DPDK subsystems(ethdev,
> eventdev etc) unlike current generation(OcteonTX).
>
Have you done any investigation into how you would like to support this,
and are you leaning towards any particular approach? The rte_bus approach
you outline below does sound like it would suit this multi-function device.
> I think, Another possible way to handle this in generic way is to:
> Register a new rte_bus for the shared PCI access which sits on top PCIe bus.
> With new bus's scan and probe scheme, it can probe the two devices.
>
>
Yes, this would work and I think it makes a lot of sense in the case
where you have logically independent hardware functional blocks on a
shared bus. In our particular case, we only have a single physical
device, which we are presenting as two logical devices purely to improve
the software model through DPDK's existing infrastructure. We may also
need to implement some shared context for protecting access to shared
resources such as registers and to synchronize the exposure of
capabilities. In the case of the IXGBE family of devices, they can
support MACsec or IPsec functionality but not both at the same time, so
some mechanism of passing this state between the net and crypto PMDs
will be required; I sketch below the kind of arbitration I mean. I guess
it should be possible to do this through the bus model as well, but
we'll need to have another look, although my initial feeling is that
they are slightly different problems.
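Something like the following is the kind of arbitration I mean; the names
are invented and it only assumes the rte_spinlock API, it is not a
proposal for the actual interface:

#include <errno.h>
#include <rte_spinlock.h>

/*
 * The first PMD to claim a mutually exclusive capability wins; the other
 * PMD then has to report that capability as unavailable.
 */
enum shared_cap {
        SHARED_CAP_NONE = 0,
        SHARED_CAP_MACSEC,
        SHARED_CAP_IPSEC,
};

struct shared_caps {
        rte_spinlock_t lock;
        enum shared_cap active;
};

static int
shared_cap_claim(struct shared_caps *caps, enum shared_cap want)
{
        int ret = -EBUSY;

        rte_spinlock_lock(&caps->lock);
        if (caps->active == SHARED_CAP_NONE || caps->active == want) {
                caps->active = want;
                ret = 0;
        }
        rte_spinlock_unlock(&caps->lock);
        return ret;
}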
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 11:08 ` Jerin Jacob
2017-05-10 11:31 ` Declan Doherty
@ 2017-05-10 11:37 ` Thomas Monjalon
1 sibling, 0 replies; 21+ messages in thread
From: Thomas Monjalon @ 2017-05-10 11:37 UTC (permalink / raw)
To: Jerin Jacob, Declan Doherty, Radu Nicolau; +Cc: dev
10/05/2017 13:08, Jerin Jacob:
> From: Declan Doherty <declan.doherty@intel.com>
> >
> > Hey Thomas, I've been working on this with Radu, so see my take below
> >
> > On 10/05/2017 11:28 AM, Thomas Monjalon wrote:
> > > 10/05/2017 12:11, Radu Nicolau:
> > > > Hi
> > > >
> > > > On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
> > > > > Hi,
> > > > >
> > > > > 09/05/2017 16:57, Radu Nicolau:
> > > > > > Updated PCI initialization code to allow devices to be shared across multiple PMDs.
> > > > > >
> > > > > > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > > > > I was waiting the day when we have a device shared
> > > > > by two different interfaces.
> > > > > Note that some Mellanox and Chelsio devices already instantiate
> > > > > two ethdev ports per PCI device.
> > > > >
> > > > > Please explain your idea behind this "shared" flag.
> > > > > What is your exact need?
> > > >
> > > > Currently for each pci device a look-up into a list of PMDs is
> > > > performed, and when a match is found the system moves to the next
> > > > device. Having this flag will allow a PMD to inform the system that
> > > > there may be more matches, more PMDs that can be used for this
> > > > particular device.
> > > > There is a difference when comparing to the devices you mentioned above,
> > > > in this case the PMDs are totally different types, one network and one
> > > > cryptodev PMD for each IXGBE network card.
> > >
> > > Yes I know it is a lack in DPDK.
> > > Linux introduced MultiFunction Device in 2005:
> > > http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
> >
> > So at the most basic level the intention is to allow more than one device of
> > different types, in our case a net PMD and a crypto PMD, to be instantiated
> > on a single PCI bar, in essence to share the bar. I'm not familiar with the
> > approaches taken in the Mellanox and Chelsio devices but I assume they are
> > handled with the driver probe/create functions independently from the EAL
> > infrastructure?
Yes it is done in ethdev driver without real impact on EAL.
> > For the initial proto-typing of this RFC we only implemented the
> > multi-device creation but I envisage that there will be a requirement for
> > sharing state between drivers, or at a minimum implementing locking around
> > shared resources, registers etc. And I would like to see this done in a
> > generic fashion that can me leverage by any driver and not have each driver
> > having to solve this independently.
>
> Cavium's next generation PCI based NW devices has similar scheme where we
> need to share the same BAR with multiple DPDK subsystems(ethdev,
> eventdev etc) unlike current generation(OcteonTX).
>
> I think, Another possible way to handle this in generic way is to:
> Register a new rte_bus for the shared PCI access which sits on top PCIe bus.
> With new bus's scan and probe scheme, it can probe the two devices.
Jerin, I don't see the benefit of a new virtual bus.
> > I think this approach was sufficient to enable the RFC and kick off the
> > discussion, but it is not a fully featured solution and we wanted to get
> > community feedback before progressing to far along with a fully featured
> > solution.
Yes, that's how I've understood the RFC.
That's why I'm trying to start the discussion early, requesting more input.
10/05/2017 12:47, Radu Nicolau:
> We have considered a vdev crypto PMD approach that would not have
> required changes to the EAL section, but it would have required some sort
> of side communication with the IXGBE PMD; another one was some sort of
> on-demand initialized cryptodev. Compared to these approaches the current one
> is cleaner and makes more sense: initialize a net and a crypto PMD for a
> device that is both a NIC and a crypto device.
Yes regarding the probing, an EAL change makes more sense.
If someone has another view, please share.
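To make the probing change under discussion concrete, here is a minimal,
illustration-only sketch of how the EAL probe loop could keep matching
drivers once one of them marks the device as shared. The
RTE_PCI_DRV_SHARED_DEVICE flag, the helper pci_probe_one_driver() and the
local list are assumptions for this sketch, not existing EAL API and not
necessarily what the RFC patch implements.

#include <sys/queue.h>
#include <rte_pci.h>

/* Hypothetical flag a PMD would set in rte_pci_driver.drv_flags to tell
 * the EAL that other PMDs may also bind to the same PCI device. */
#define RTE_PCI_DRV_SHARED_DEVICE 0x0100

/* Stand-in for the EAL's per-driver match/probe helper:
 * <0 = hard error, 0 = matched and probed, >0 = driver does not match. */
extern int pci_probe_one_driver(struct rte_pci_driver *drv,
				struct rte_pci_device *dev);

TAILQ_HEAD(sketch_driver_list, rte_pci_driver);
static struct sketch_driver_list drv_list =
	TAILQ_HEAD_INITIALIZER(drv_list);

static int
pci_probe_all_drivers(struct rte_pci_device *dev)
{
	struct rte_pci_driver *drv;
	int rc, matched = 0;

	TAILQ_FOREACH(drv, &drv_list, next) {
		rc = pci_probe_one_driver(drv, dev);
		if (rc < 0)
			return rc;	/* hard error, abort */
		if (rc > 0)
			continue;	/* no match, try the next driver */
		matched++;
		/* Default behaviour: stop at the first matching driver. */
		if (!(drv->drv_flags & RTE_PCI_DRV_SHARED_DEVICE))
			break;
		/* Shared device: keep scanning so another PMD type
		 * (e.g. the crypto PMD) can also bind to this device. */
	}
	return matched ? 0 : 1;
}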
The other important topic to discuss is how we share device registers
between different drivers.
Please do not limit yourself to what exists and do not try to avoid
any breakage when brainstorming.
Thanks
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
2017-05-10 11:31 ` Declan Doherty
@ 2017-05-10 12:18 ` Jerin Jacob
0 siblings, 0 replies; 21+ messages in thread
From: Jerin Jacob @ 2017-05-10 12:18 UTC (permalink / raw)
To: Declan Doherty; +Cc: Thomas Monjalon, Radu Nicolau, dev
-----Original Message-----
> Date: Wed, 10 May 2017 12:31:48 +0100
> From: Declan Doherty <declan.doherty@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: Thomas Monjalon <thomas@monjalon.net>, Radu Nicolau
> <radu.nicolau@intel.com>, dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> On 10/05/2017 12:08 PM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Wed, 10 May 2017 11:52:45 +0100
> > > From: Declan Doherty <declan.doherty@intel.com>
> > > To: Thomas Monjalon <thomas@monjalon.net>, Radu Nicolau
> > > <radu.nicolau@intel.com>
> > > CC: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances.
> > > User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101
> > > Thunderbird/45.8.0
> > >
> > > Hey Thomas, I've been working on this with Radu, so see my take below
> > >
> > > On 10/05/2017 11:28 AM, Thomas Monjalon wrote:
> > > > 10/05/2017 12:11, Radu Nicolau:
> > > > > Hi
> > > > >
> > > > >
> > > > > On 5/10/2017 10:09 AM, Thomas Monjalon wrote:
> > > > > > Hi,
> > > > > >
> > > > > > 09/05/2017 16:57, Radu Nicolau:
> > > > > > > Updated PCI initialization code to allow devices to be shared across multiple PMDs.
> > > > > > >
> > > > > > > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > > > > > I was waiting the day when we have a device shared
> > > > > > by two different interfaces.
> > > > > > Note that some Mellanox and Chelsio devices already instantiate
> > > > > > two ethdev ports per PCI device.
> > > > > >
> > > > > > Please explain your idea behind this "shared" flag.
> > > > > > What is your exact need?
> > > > >
> > > > > Currently for each pci device a look-up into a list of PMDs is
> > > > > performed, and when a match is found the system moves to the next
> > > > > device. Having this flag will allow a PMD to inform the system that
> > > > > there may be more matches, more PMDs that can be used for this
> > > > > particular device.
> > > > > There is a difference when comparing to the devices you mentioned above,
> > > > > in this case the PMDs are totally different types, one network and one
> > > > > cryptodev PMD for each IXGBE network card.
> > > >
> > > > Yes I know it is a lack in DPDK.
> > > > Linux introduced MultiFunction Device in 2005:
> > > > http://events.linuxfoundation.org/sites/events/files/slides/belloni-mfd-regmap-syscon_0.pdf
> > > >
> > >
> > > So at the most basic level the intention is to allow more than one device of
> > > different types, in our case a net PMD and a crypto PMD, to be instantiated
> > > on a single PCI bar, in essence to share the bar. I'm not familiar with the
> > > approaches taken in the Mellanox and Chelsio devices but I assume they are
> > > handled with the driver probe/create functions independently from the EAL
> > > infrastructure?
> > >
> > > For the initial proto-typing of this RFC we only implemented the
> > > multi-device creation but I envisage that there will be a requirement for
> > > sharing state between drivers, or at a minimum implementing locking around
> > > shared resources, registers etc. And I would like to see this done in a
> > > generic fashion that can be leveraged by any driver and not have each driver
> > > having to solve this independently.
> >
> > Cavium's next generation PCI based NW devices has similar scheme where we
> > need to share the same BAR with multiple DPDK subsystems(ethdev,
> > eventdev etc) unlike current generation(OcteonTX).
> >
>
> Have you done any investigation into how you would like to support this, and are
> you trending towards any particular approach? The rte_bus approach you outline
> below does sound like it would suit this multi-function device.
Not much investigation has been done as it's for the next generation.
It is not a PCIe multi-function device.
There will be a lot of shared functions between these shared DPDK devices and
there should be a placeholder for this in the code. I thought driver/bus/foo
may be an option. In addition to this, if we expose new function pointer based
interfaces in the bus for shared device register access and other shared
resource alloc/free between these two DPDK devices, it can be centralized in
one place (driver/bus/foo) and generalized.
Just my 2c. We haven't done any prototype.
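Purely as a strawman for the driver/bus/foo idea above (no prototype exists,
and every name below is invented for illustration), such a bus could register
the usual scan/probe hooks and additionally hand out a serialised register
accessor that every PMD bound to the shared BAR would go through:

#include <stdint.h>
#include <rte_bus.h>
#include <rte_spinlock.h>

/* Shared-register accessor given to every PMD bound to the same BAR;
 * the lock centralises the synchronisation discussed above. */
struct shared_pci_regs {
	volatile void *bar;
	rte_spinlock_t lock;
};

static inline void
shared_reg_write(struct shared_pci_regs *r, uint32_t off, uint32_t val)
{
	rte_spinlock_lock(&r->lock);
	*(volatile uint32_t *)((volatile char *)r->bar + off) = val;
	rte_spinlock_unlock(&r->lock);
}

/* Scan: walk the already-scanned PCI devices and create one child
 * device per DPDK subsystem (ethdev, cryptodev, eventdev, ...). */
static int shared_pci_scan(void) { return 0; /* enumeration omitted */ }

/* Probe: match the child devices against their respective PMDs. */
static int shared_pci_probe(void) { return 0; /* matching omitted */ }

static struct rte_bus shared_pci_bus = {
	.scan  = shared_pci_scan,
	.probe = shared_pci_probe,
};

RTE_REGISTER_BUS(shared_pci, shared_pci_bus);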
>
> > I think, Another possible way to handle this in generic way is to:
> > Register a new rte_bus for the shared PCI access which sits on top PCIe bus.
> > With new bus's scan and probe scheme, it can probe the two devices.
> >
> >
>
> Yes, this would work and I think it makes a lot of sense in the case where
> you have logically independent hardware functional blocks on a shared bus.
> In our particular case, we only have a single physical device, which we are
> presenting as 2 logical devices purely to improve the sw model through DPDK's
> existing infrastructure. We may also need to implement some shared context
> for protecting access to shared resources such as registers and to
> synchronize exposure of capabilities. In the case of the IXGBE family of
> devices they can support MACsec or IPsec functionality but not both at the
> same time, so some mechanism of passing this state between the net and
> crypto PMDs will be required. I guess it should be possible to do this
> through the bus model as well but we'll need to have another look, although
> my initial feeling is they are slightly different problems.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
` (4 preceding siblings ...)
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 5/5] examples: updated IPSec sample app to support inline IPSec Radu Nicolau
@ 2017-05-10 16:07 ` Boris Pismenny
2017-05-10 17:21 ` Declan Doherty
2017-05-16 21:46 ` Thomas Monjalon
6 siblings, 1 reply; 21+ messages in thread
From: Boris Pismenny @ 2017-05-10 16:07 UTC (permalink / raw)
To: radu.nicolau, dev
> 5. The addition of inline crypto metadata into the rte_mbuf structure to allow the required egress metadata to be given to the NIC PMD to build the necessary transmit descriptors in tx_burst processing when the PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a better approach to handling the passing of this metadata to the NIC as it is understood that different hardware accelerators which support this offload may have different requirements for metadata depending on implementation and other capabilities in the device. One possibility we have consider is that the last 16 bytes of mbuf is reserved for device specific metadata, which layout is flexible depending on the hardware being used.
>
> struct rte_mbuf {
> ...
> /** Inline IPSec metadata*/
> struct {
> uint16_t sa_idx; /**< SA index */
> uint8_t pad_len; /**< Padding length */
> uint8_t enc;
> } inline_ipsec;
> } __rte_cache_aligned;
Assuming that you see the packet with PKT_TX_IPSEC_INLINE_CRYPTO, could you infer these parameters from the packet itself?
>
>
> The figure below demonstrates how the new functionality allows the inline crypto acceleration to be integrated into an existing IPsec stack egress path which is using the cryptodev APIs. It is important to note on the data path that the crypto PMD is only processing the metadata of the mbuf and is not modifying the packet payload in any way. The main function of the crypto PMD in this approach is to support the configuration of the SA material in the hardware using the cryptodev APIs and to enable transparent integration of the inline crypto acceleration into the IPsec data path. Only the IPsec stack's control path is aware of the inline processing and is required to use the extra IPsec transform outlined above.
>
>
> Egress Data Path
> |
> +--------|--------+
> | egress IPsec |
> | | |
> | +------V------+ |
> | | SABD lookup | | <------ SA maps to cryptodev session
> | +------|------+ |
> | +------V------+ |
> | | Tunnel | | <------ Add tunnel header to packet
> | +------|------+ |
> | +------V------+ |
> | | ESP | | <------ Add ESP header/trailer to packet
> | +------|------+ |
> | +------|------+ |
> | | \--------------------\
> | | Crypto | | | <- Crypto processing through
> | | /----------------\ | inline crypto PMD
> | +------|------+ | | |
> +--------V--------+ | |
> | | |
> +--------V--------+ | | create <-- SA is added to hw
> | L2 Stack | | | inline using existing create
> +--------|--------+ | | session sym session APIs
> | | | |
> +--------V--------+ +---|---|----V---+
> | | | \---/ | | <- Set inline crypto offload
> | NIC PMD | | INLINE | | flag and required metadata
> | | | CRYPTO PMD | | to mbuf. Packet data remains
> +--------|--------+ +------------V---+ unmodified.
> | |
> +--------|------------+ Add/Remove
> | HW ACCELERATED NIC | SA Entry
> | |-----\ | |
> | | +---|----+ | |
> | | | Inline |<-------------/
> | | | Crypto | |
> | | +---|----+ | <-- Packet Encryption/Decryption and
> | |-----/ | Authentication happens inline
> +--------|------------+
> V
>
>
> IXGBE enablement details:
> - Only AES-GCM 128 ESP Tunnel/Transport mode and Authentication only mode are supported.
>
> IXGBE PMD:
>
> Rx Path
> - To enable decryption for incoming packets 3 tables have to be programmed
> in the IXGBE device: IP table, SPI table, and Key table. The first one has
> 128 entries, the other 2 have 1024. An encrypted packet that needs to be
> decrypted inline needs matching entries in all tables to be processed:
> the destination IP needs to match an entry in the IP table, the SPI needs to
> match an entry in the SPI table, and the SPI table entry needs to have
> a valid index into the Key table. If all conditions are met then the
> packet is decrypted and the crypto status is set in the rx descriptors.
> - After the inline crypto processing the packet is presented to host as a
> regular rx packet but all IPsec related header are still attached to the packet.
> - The IXGBE net driver rx path checks the descriptors and based on the
> crypto status sets additional flags in the rte_mbuf.ol_flags field.
> - If decryption is successful, the received packet contains the decrypted
> data where the encrypted data was when the packet arrived.
> - On the DPDK crypto PMD side, the rte_mbuf.ol_flags are checked and the
> decryption status set accordingly.
>
>
> TX path:
> - For encryption of the outgoing packets there is only one table that
> contains the key as all the other operations are performed by software.
> The host needs to program this table and set the tx descriptors.
>
> - The IXGBE net driver tx path checks the additional field
> rte_mbuf.inline_ipsec, and if the packet needs to be encrypted then the
> tx descriptors are set accordingly.
>
> Crypto IXGBE PMD:
>
> - implemented IXGBE Crypto driver; mostly pass-through plus error
> checking for the enqueue-dequeue operations and IXGBE crypto engine setup
> and configuration
>
> IPsec Gateway Sample Application
>
> - ipsec gateway example updated to support inline ipsec
>
>
> Radu Nicolau (5):
> cryptodev: Updated API to add suport for inline IPSec.
> pci: allow shared device instances.
> mbuff: added inline IPSec flags and metadata
> cryptodev: added new crypto PMD supporting inline IPSec for IXGBE
> examples: updated IPSec sample app to support inline IPSec
>
> config/common_base | 7 +
> drivers/crypto/Makefile | 2 +
> drivers/crypto/ixgbe/Makefile | 63 +++
> drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c | 576 +++++++++++++++++++++
> drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h | 180 +++++++
> drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c | 474 +++++++++++++++++
> .../crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map | 3 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 128 ++---
> drivers/net/ixgbe/ixgbe_rxtx.c | 22 +-
> drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 34 ++
> examples/ipsec-secgw/esp.c | 7 +-
> examples/ipsec-secgw/ipsec.h | 2 +
> examples/ipsec-secgw/sa.c | 165 ++++--
> lib/librte_cryptodev/rte_crypto_sym.h | 34 +-
> lib/librte_cryptodev/rte_cryptodev.h | 5 +-
> lib/librte_eal/common/eal_common_pci.c | 15 +-
> lib/librte_eal/common/include/rte_pci.h | 18 +-
> lib/librte_mbuf/rte_mbuf.h | 22 +
> mk/rte.app.mk | 1 +
> 19 files changed, 1625 insertions(+), 133 deletions(-)
> create mode 100644 drivers/crypto/ixgbe/Makefile
> create mode 100644 drivers/crypto/ixgbe/ixgbe_crypto_pmd_ops.c
> create mode 100644 drivers/crypto/ixgbe/ixgbe_crypto_pmd_private.h
> create mode 100644 drivers/crypto/ixgbe/ixgbe_rte_cyptodev.c
> create mode 100644 drivers/crypto/ixgbe/rte_pmd_ixgbe_crypto_version.map
>
This is a nice approach.
We are also working on adding support for IPsec inline crypto in DPDK.
I hope we could submit a RFC with working code soon.
We considered 3 approaches for IPsec inline support:
1. IPsec inline as a cryptodev (like this RFC)
2. IPsec inline as a rte_flow action. (see details below)
3. A mix between approach 1 and approach 2.
In approach 2, there is no need for an additional crypto PMD.
Inline IPsec is exposed as another feature of a NIC PMD.
For the control-path, we introduce a new rte_flow_action_type for crypto
and a flag to mark flows as egress flows.
Then, it is possible to program the SA by calling rte_flow_create with
an appropriate pattern of IP and ESP header fields, and an action that
contains rte_crypto_ipsec_xform as the configuration.
The main benefit of using the rte_flow API is that we can reuse the
existing API with patterns and actions. For example, it would be
possible to add support for UDP encapsulation of IPsec without
changing the API. Support for VLAN/VXLAN/GRE/etc could be added
similarly to UDP encapsulation.
For the data path, everything is handled in the NIC PMD during rx/tx_burst.
The application marks the packets for encryption in the
transmit path, and it receives packets marked as decrypted/auth-failed
on the receive side.
In approach 3, there is a crypto PMD for configuring the keys, then
the rte_flow_action_type configuration contains the crypto session
and the data-path could go through the crypto PMD as in approach 1.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
2017-05-10 16:07 ` [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Boris Pismenny
@ 2017-05-10 17:21 ` Declan Doherty
2017-05-11 5:27 ` Boris Pismenny
0 siblings, 1 reply; 21+ messages in thread
From: Declan Doherty @ 2017-05-10 17:21 UTC (permalink / raw)
To: Boris Pismenny, radu.nicolau, dev
On 10/05/2017 5:07 PM, Boris Pismenny wrote:
>
>
>> 5. The addition of inline crypto metadata into the rte_mbuf structure to allow the required egress metadata to be given to the NIC PMD to build the necessary transmit descriptors in tx_burst processing when the PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a better approach to handling the passing of this metadata to the NIC as it is understood that different hardware accelerators which support this offload may have different requirements for metadata depending on implementation and other capabilities in the device. One possibility we have consider is that the last 16 bytes of mbuf is reserved for device specific metadata, which layout is flexible depending on the hardware being used.
>>
>> struct rte_mbuf {
>> ...
>> /** Inline IPSec metadata*/
>> struct {
>> uint16_t sa_idx; /**< SA index */
>> uint8_t pad_len; /**< Padding length */
>> uint8_t enc;
>> } inline_ipsec;
>> } __rte_cache_aligned;
>
> Assuming that you see the packet with PKT_TX_IPSEC_INLINE_CRYPTO, could you infer these parameters from the packet itself?
>
In our case this isn't really possible as each packet in a burst could
be associated with a different security association/crypto session
and will also have different lengths/padding etc. We could use some sort
of cookie to store this, but I think it would have a big performance
impact. I do think that this structure in the mbuf should not be device
specific, as it is now, for the required metadata, but I would like to
guarantee that the metadata is in the mbuf.
>>
>>
....
>
> This is a nice approach.
>
> We are also working on adding support for IPsec inline crypto in DPDK.
> I hope we could submit a RFC with working code soon.
Is your device capable of full IPsec protocol processing, ESP header
insertion, encap/decap etc? In our case the inline functionality is
limited to the crypto processing, so we are working on the assumption
that the user will be integrating with an existing IPsec stack. On
ingress the lookup is based on the Destination IP and SPI; on egress the
SA is identified by the metadata carried in the mbuf.
>
> We considered 3 approaches for IPsec inline support:
> 1. IPsec inline as a cryptodev (like this RFC)
> 2. IPsec inline as a rte_flow action. (see details below)
> 3. Mix between approach 2 and approach 3.
>
> In approach 2, there is no need for an additional crypto PMD.
> Inline IPsec is exposed as another feature of a NIC PMD.
>
> For the control-path, we introduce a new rte_flow_action_type for crypto
> and a flag to mark flows as egress flows.
> Then, it is possible to program the SA by calling rte_flow_create with
> an appropriate pattern of IP and ESP header fields, and an action that
> contains rte_crypto_ipsec_xform as the configuration.
>
> The main benefit of using the rte_flow API is that we can reuse, the
> existing API with patterns and actions. For example, it would be
> possible to add support for UDP encapsulation of IPsec without
> changing the API. Support for VLAN/VXLAN/GRE/etc could be added
> similarly to UDP encapsulation.
This makes sense when the hw is capable of full offload. So the rte_flow
actions might be VxLAN and ESP Tunnel for a flow. The other approach is that
two separate rules are created, one for IPsec, then a second for the
VxLAN tunnel which triggers on the IPsec flow, but this probably implies
that either the PMD flattens these flow actions into a single action or
the hw supports recirculation. One concern I would have is populating
the namespace of rte_flow with IPsec/crypto session material, but I
guess it should be possible to come up with a clean way of supporting this.
> For the data-path, all is handled in the NIC PMD, during rx/tx_burst.
> While, the application marks the packets for encryption in the
> transmit path. And it receives packets marked as decrypted/auth-fail
> on the receive side.
When you say the application marks the packet, this is essentially the
IPsec stack? The main benefit in the approach of this RFC is that it is
possible to integrate the inline crypto processing transparently in the
data path. The crypto PMD can handle setting and interpreting the
metadata required and the IPsec stack is just using the crypto PMD as it
would any other crypto PMD.
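For reference, a rough sketch of that transparent egress integration
(queue ids, port ids and the burst size are placeholders): the IPsec stack
keeps using the normal cryptodev burst calls, the inline crypto PMD only
stamps the offload flag/metadata on each mbuf, and the actual encryption
happens later in the NIC when the packets are transmitted.

#include <rte_cryptodev.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
ipsec_egress_burst(uint8_t crypto_dev, uint8_t eth_port,
		   struct rte_crypto_op **ops, uint16_t nb_ops)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	/* The inline crypto PMD validates the ops and sets the Tx offload
	 * flag plus the per-SA metadata; no payload is touched here. */
	nb = rte_cryptodev_enqueue_burst(crypto_dev, 0, ops, nb_ops);
	nb = rte_cryptodev_dequeue_burst(crypto_dev, 0, ops, nb);

	for (i = 0; i < nb && i < 32; i++)
		pkts[i] = ops[i]->sym->m_src;	/* payload still cleartext */

	/* The NIC PMD sees the flag/metadata and builds descriptors that
	 * make the hardware encrypt the packet on the wire. */
	return rte_eth_tx_burst(eth_port, 0, pkts, i);
}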
>
> In approach 3, there is a crypto PMD for configuring the keys, then
> the rte_flow_action_type configuration contains the crypto session
> and the data-path could go through the crypto PMD as in approach 1.
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
2017-05-10 17:21 ` Declan Doherty
@ 2017-05-11 5:27 ` Boris Pismenny
2017-05-11 9:05 ` Radu Nicolau
0 siblings, 1 reply; 21+ messages in thread
From: Boris Pismenny @ 2017-05-11 5:27 UTC (permalink / raw)
To: Declan Doherty, radu.nicolau, dev
> >> 5. The addition of inline crypto metadata into the rte_mbuf structure to
> allow the required egress metadata to be given to the NIC PMD to build the
> necessary transmit descriptors in tx_burst processing when the
> PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a
> better approach to handling the passing of this metadata to the NIC as it is
> understood that different hardware accelerators which support this offload
> may have different requirements for metadata depending on
> implementation and other capabilities in the device. One possibility we have
> consider is that the last 16 bytes of mbuf is reserved for device specific
> metadata, which layout is flexible depending on the hardware being used.
> >>
> >> struct rte_mbuf {
> >> ...
> >> /** Inline IPSec metadata*/
> >> struct {
> >> uint16_t sa_idx; /**< SA index */
> >> uint8_t pad_len; /**< Padding length */
> >> uint8_t enc;
> >> } inline_ipsec;
> >> } __rte_cache_aligned;
> >
> > Assuming that you see the packet with PKT_TX_IPSEC_INLINE_CRYPTO,
> could you infer these parameters from the packet itself?
> >
>
> In our case this isn't really possible as each packet in a burst could be be
> associated with a different security association/crypto session and will also
> have different lengths/padding etc. We could use some sort of cookie to
> store this, but I think it would have a big performance impact. I do think that
> this structure in the mbuf should not be device specific as it is now for the
> required metadata, but I would like to guarantee that the metadata is in the
> mbuf.
>
>
> >
> > This is a nice approach.
> >
> > We are also working on adding support for IPsec inline crypto in DPDK.
> > I hope we could submit a RFC with working code soon.
> Iis your device capable of full IPsec protocol processing, ESP header insertion,
> encap/decap etc? In our case the inline functionality is limited to the crypto
> processing, so we are working on the assumption that the user will be
> integrating with an existing IPsec stack. On ingress the lookup is based on the
> Destination IP and SPI, on egress the metadata is
Currently our device is not capable of full IPsec protocol processing.
But future devices will not have this limitation and it shouldn't be
assumed in the API. We also need to integrate with an existing
IPsec stack. However, we perform a lookup on both egress and
ingress for source IP, destination IP and SPI.
>
> >
> > We considered 3 approaches for IPsec inline support:
> > 1. IPsec inline as a cryptodev (like this RFC) 2. IPsec inline as a
> > rte_flow action. (see details below) 3. Mix between approach 2 and
> > approach 3.
> >
> > In approach 2, there is no need for an additional crypto PMD.
> > Inline IPsec is exposed as another feature of a NIC PMD.
> >
>
> > For the control-path, we introduce a new rte_flow_action_type for
> > crypto and a flag to mark flows as egress flows.
> > Then, it is possible to program the SA by calling rte_flow_create with
> > an appropriate pattern of IP and ESP header fields, and an action that
> > contains rte_crypto_ipsec_xform as the configuration.
> >
> > The main benefit of using the rte_flow API is that we can reuse, the
> > existing API with patterns and actions. For example, it would be
> > possible to add support for UDP encapsulation of IPsec without
> > changing the API. Support for VLAN/VXLAN/GRE/etc could be added
> > similarly to UDP encapsulation.
>
> This make sense when hw is capable of full offload. So the rte_flow actions
> might be VxLAN and ESP Tunnel for a flow. The other approach is that to
> separate rules are created one for IPsec, then a second for the VxLAN tunnel
> which trigger on the IPsec flow, but this probably implies that either the PMD
> flattens these flow actions into a single action or the hw supports re
> circulation. One concern I would have is population of the namespace of
> rte_flow with IPsec/crypto session material, but I guess it should be possible
> to come up with a clean way of supporting this.
Full offload is not necessary if the device has a capable parser.
Encapsulations could be added by the DPDK application, and
for inline offload the device must be aware of them. Unlike other
crypto PMDs that actually perform encryption, inline crypto PMDs
need to set up the entire packet format up to ESP in order to process
the request. One concern I have in this matter is that the semantics of
other crypto PMDs are different from inline - they actually perform crypto.
Inline would break if the packet format is not correct.
Adding a VXLAN flow and an ESP tunnel separately is similar to the
third approach, because the PMD requires some indication that it should
flatten these rules and this indication will be in the form of a crypto session.
Maybe this approach will be the best.
In the 3rd approach there is a crypto PMD that allows for zero changes
in the datapath and it is possible to set device-specific metadata in its
enqueue_burst, while the control path is split between rte_crypto and
rte_flow. First a crypto session is set up, then the crypto session is
provided to rte_flow_create as an action. The crypto session will not
include any networking related code, just the keys and the salt.
What is your concern about the population of rte_flow with IPsec/crypto
material? I think that at least some crypto in rte_flow is necessary to
support advanced use-cases without re-implementing rte_flow inside
rte_crypto.
>
>
> > For the data-path, all is handled in the NIC PMD, during rx/tx_burst.
> > While, the application marks the packets for encryption in the
> > transmit path. And it receives packets marked as decrypted/auth-fail
> > on the receive side.
>
> when you say the application marks the packet, this is essentially the IPsec
> stack? The main benefit in the approach of this RFC is that it is possible to
> integrate the inline crypto processing transparently in the data path. The
> crypto PMD can handle setting and interpreting the metadata required and
> the IPsec stack is just using the crypto PMD as it would any other crypto
> PMD.
>
Right. I'm concerned about the overhead of going through a crypto PMD,
and I understand that it might be necessary for your device.
Changes in the data path would be minor whichever way we implement inline.
If the metadata is not device specific, then shouldn't the application set it
directly based on the SA itself?
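A loose sketch of this third approach, under the assumption that a
session-type crypto action is added to rte_flow (the action type below is
hypothetical, and the cryptodev session call uses the single-step create
signature the API had at the time of this thread):

#include <rte_cryptodev.h>
#include <rte_flow.h>

#define RTE_FLOW_ACTION_TYPE_CRYPTO_SESSION ((enum rte_flow_action_type)101)

static struct rte_flow *
setup_sa(uint8_t crypto_dev, uint8_t eth_port,
	 struct rte_crypto_sym_xform *aead_xform,
	 const struct rte_flow_item pattern[])
{
	struct rte_flow_attr attr = { .egress = 1 };
	struct rte_flow_error err;

	/* Keys and salt only: no networking information in the session. */
	struct rte_cryptodev_sym_session *sess =
		rte_cryptodev_sym_session_create(crypto_dev, aead_xform);
	if (sess == NULL)
		return NULL;

	/* The flow rule supplies the packet-matching side (IP/ESP pattern)
	 * and points at the crypto session as its action configuration. */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_CRYPTO_SESSION, .conf = sess },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(eth_port, &attr, pattern, actions, &err);
}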
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
2017-05-11 5:27 ` Boris Pismenny
@ 2017-05-11 9:05 ` Radu Nicolau
0 siblings, 0 replies; 21+ messages in thread
From: Radu Nicolau @ 2017-05-11 9:05 UTC (permalink / raw)
To: Boris Pismenny, Declan Doherty, dev
Hi,
Just a comment on the last question
On 5/11/2017 6:27 AM, Boris Pismenny wrote:
>>>> 5. The addition of inline crypto metadata into the rte_mbuf structure to
>> allow the required egress metadata to be given to the NIC PMD to build the
>> necessary transmit descriptors in tx_burst processing when the
>> PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a
>> better approach to handling the passing of this metadata to the NIC as it is
>> understood that different hardware accelerators which support this offload
>> may have different requirements for metadata depending on
>> implementation and other capabilities in the device. One possibility we have
>> consider is that the last 16 bytes of mbuf is reserved for device specific
>> metadata, which layout is flexible depending on the hardware being used.
>>>> struct rte_mbuf {
>>>> ...
>>>> /** Inline IPSec metadata*/
>>>> struct {
>>>> uint16_t sa_idx; /**< SA index */
>>>> uint8_t pad_len; /**< Padding length */
>>>> uint8_t enc;
>>>> } inline_ipsec;
>>>> } __rte_cache_aligned;
>>> Assuming that you see the packet with PKT_TX_IPSEC_INLINE_CRYPTO,
>> could you infer these parameters from the packet itself?
>> In our case this isn't really possible as each packet in a burst could be be
>> associated with a different security association/crypto session and will also
>> have different lengths/padding etc. We could use some sort of cookie to
>> store this, but I think it would have a big performance impact. I do think that
>> this structure in the mbuf should not be device specific as it is now for the
>> required metadata, but I would like to guarantee that the metadata is in the
>> mbuf.
>>
>>
>>> This is a nice approach.
>>>
>>> We are also working on adding support for IPsec inline crypto in DPDK.
>>> I hope we could submit a RFC with working code soon.
>> Iis your device capable of full IPsec protocol processing, ESP header insertion,
>> encap/decap etc? In our case the inline functionality is limited to the crypto
>> processing, so we are working on the assumption that the user will be
>> integrating with an existing IPsec stack. On ingress the lookup is based on the
>> Destination IP and SPI, on egress the metadata is
> Currently our device is not capable of full IPsec protocol processing.
> But, future devices will not have this limitation and it shouldn't be
> assumed in the API. We also, need to integrate with an existing
> IPsec stack. However, we perform a lookup on both egress and
> Ingress for source IP, destination IP and SPI.
>
>>> We considered 3 approaches for IPsec inline support:
>>> 1. IPsec inline as a cryptodev (like this RFC) 2. IPsec inline as a
>>> rte_flow action. (see details below) 3. Mix between approach 2 and
>>> approach 3.
>>>
>>> In approach 2, there is no need for an additional crypto PMD.
>>> Inline IPsec is exposed as another feature of a NIC PMD.
>>>
>>> For the control-path, we introduce a new rte_flow_action_type for
>>> crypto and a flag to mark flows as egress flows.
>>> Then, it is possible to program the SA by calling rte_flow_create with
>>> an appropriate pattern of IP and ESP header fields, and an action that
>>> contains rte_crypto_ipsec_xform as the configuration.
>>>
>>> The main benefit of using the rte_flow API is that we can reuse, the
>>> existing API with patterns and actions. For example, it would be
>>> possible to add support for UDP encapsulation of IPsec without
>>> changing the API. Support for VLAN/VXLAN/GRE/etc could be added
>>> similarly to UDP encapsulation.
>> This make sense when hw is capable of full offload. So the rte_flow actions
>> might be VxLAN and ESP Tunnel for a flow. The other approach is that to
>> separate rules are created one for IPsec, then a second for the VxLAN tunnel
>> which trigger on the IPsec flow, but this probably implies that either the PMD
>> flattens these flow actions into a single action or the hw supports re
>> circulation. One concern I would have is population of the namespace of
>> rte_flow with IPsec/crypto session material, but I guess it should be possible
>> to come up with a clean way of supporting this.
> Full offload is not necessary if the device has a capable parser.
> Encapsulations could be added by the DPDK application, and
> for inline offload the device must be aware of them. Unlike other
> crypto PMDs that actually perform encryption, inline crypto PMDs
> need to setup the entire packet format up to ESP in order to process
> the request. One concern I have in this matter, is that the semantics of
> other crypto PMDs are different from inline - they actually perform crypto.
> Inline would break if the packet format is not correct.
>
> Adding a VXLAN flow and an ESP tunnel separately is similar to the
> third approach, because the PMD requires some indication that it should
> flatten these rules and this indication will be in the form of a crypto session.
> Maybe this approach will be the best.
>
> In the 3rd approach there is a crypto PMD that allows for zero changes
> in the datapath and it is possible to set device specific metadata in its
> enqueue_burst. While, the control path is split between rte_crypto and
> rte_flow. First a crypto session is setup, then the crypto session is
> provided to rte_flow_create as an action. The crypto session will not
> include any networking related code, just the keys and the salt.
>
> What is your concern about the population of rte_flow with IPsec/crypto
> material? I think that at least some crypto in rte_flow is necessary to
> support advanced use-cases without re-implementing rte_flow inside
> rte_crypto.
>
>>
>>> For the data-path, all is handled in the NIC PMD, during rx/tx_burst.
>>> While, the application marks the packets for encryption in the
>>> transmit path. And it receives packets marked as decrypted/auth-fail
>>> on the receive side.
>> when you say the application marks the packet, this is essentially the IPsec
>> stack? The main benefit in the approach of this RFC is that it is possible to
>> integrate the inline crypto processing transparently in the data path. The
>> crypto PMD can handle setting and interpreting the metadata required and
>> the IPsec stack is just using the crypto PMD as it would any other crypto
>> PMD.
>>
> Right. I'm concerned about the overhead of going through a crypto PMD,
> and I understand that it might be necessary for your device.
> Changes in the data path would be minor in any way we implement inline.
There is no change at all in the actual data path using the PMD
approach; the only changes required for this RFC are in the control path,
when setting up the crypto transforms.
> If the metadata is not device specific, then shouldn't the application set it
> directly based on the SA itself?
>
>
The metadata structure has to be device agnostic, but the actual data
stored might not be. In this particular case, the value of the sa_index
is the index into an internal table in the device, which is determined
by the crypto PMD when the encryption key is stored. The
application/data path is not aware of this value.
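To illustrate the point, a hypothetical fragment of the inline crypto PMD's
enqueue path: the session structure and field names below are invented for
the sketch, while the mbuf field and flag are the ones proposed in patch 3/5
of this RFC (so this only compiles with the RFC applied).

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical per-session state kept by the inline crypto PMD; the
 * device-chosen SA slot is recorded when the key is programmed. */
struct inline_crypto_session {
	uint16_t sa_idx;	/* index the hardware assigned to this SA */
};

/* Called per packet from the PMD's enqueue_burst: the application only
 * sees a normal cryptodev enqueue, the PMD fills in what the NIC needs. */
static inline void
stamp_tx_metadata(struct rte_mbuf *m,
		  const struct inline_crypto_session *s,
		  uint8_t pad_len)
{
	m->inline_ipsec.sa_idx  = s->sa_idx;	/* device-specific value */
	m->inline_ipsec.pad_len = pad_len;
	m->inline_ipsec.enc     = 1;
	m->ol_flags |= PKT_TX_IPSEC_INLINE_CRYPTO;
}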
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
` (5 preceding siblings ...)
2017-05-10 16:07 ` [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Boris Pismenny
@ 2017-05-16 21:46 ` Thomas Monjalon
2017-05-24 10:06 ` Declan Doherty
6 siblings, 1 reply; 21+ messages in thread
From: Thomas Monjalon @ 2017-05-16 21:46 UTC (permalink / raw)
To: Radu Nicolau
Cc: dev, olivier.matz, jerin.jacob, declan.doherty, Boris Pismenny
09/05/2017 16:57, Radu Nicolau:
> In this RFC we introduce a mechanism to support inline hardware
> acceleration of symmetric crypto processing of IPsec flows
> on Ethernet adapters within the cryptodev framework,
> specifically this RFC includes the initial enablement work
> for the Intel® 82599 10 GbE Controller (IXGBE).
We must stop after this first introduction and think about what
inline crypto processing is.
At the beginning are two types of processing:
- networking Rx/Tx
- crypto
Then we want to combine them in the same device.
We could also try to combine more processing:
- compression
- pattern matching
- etc
We will also probably have in the future some devices able to combine
processing or do them separately (inline crypto or simple crypto).
Is there a good way to specify these combinations?
I'm dreaming of a pipeline model with a JIT compiler...
Here we are adding one more layer to the combination of Rx/Tx + crypto:
it is a specific API for IPsec.
One more thing in this landscape:
How the eventdev model propose to combine such processing?
[...]
> 3. The definition of new tx/rx mbuf offload flags to indicate that a packet requires inline crypto processing on to the NIC PMD on transmit and to indicate that a packet has been processed by the inline crypto hardware on ingress.
>
> /**
> * Inline IPSec Rx processed packet
> */
> #define PKT_RX_IPSEC_INLINE_CRYPTO (1ULL << 17)
>
> /**
> * Inline IPSec Rx packet authentication failed
> */
> #define PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED (1ULL << 18)
>
> /**
> * Inline IPSec Tx process packet
> */
> #define PKT_TX_IPSEC_INLINE_CRYPTO (1ULL << 43)
We won't be able to add an offload flag for every protocols.
Can we define a more generic flag for Rx crypto failure?
The type of Rx crypto can be defined as a packet type.
IPsec is exactly the same thing as VLAN to this regard.
Olivier, what do you plan for VLAN flags and packet types?
Where is the item 4? :)
> 5. The addition of inline crypto metadata into the rte_mbuf structure to allow the required egress metadata to be given to the NIC PMD to build the necessary transmit descriptors in tx_burst processing when the PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a better approach to handling the passing of this metadata to the NIC as it is understood that different hardware accelerators which support this offload may have different requirements for metadata depending on implementation and other capabilities in the device. One possibility we have consider is that the last 16 bytes of mbuf is reserved for device specific metadata, which layout is flexible depending on the hardware being used.
>
> struct rte_mbuf {
> ...
> /** Inline IPSec metadata*/
> struct {
> uint16_t sa_idx; /**< SA index */
> uint8_t pad_len; /**< Padding length */
> uint8_t enc;
> } inline_ipsec;
> } __rte_cache_aligned;
I really think we should stop adding such things in the mbuf.
It is convenient for performance, but have we looked at other options?
We cannot reserve a metadata block and share it with other layers, because
it would prevent us from combining offloads of different layers.
And we won't have enough space for every layers.
There was the same discussion when introducing cryptodev. And the
conclusion was to not directly link any crypto metadata to the mbuf.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows
2017-05-16 21:46 ` Thomas Monjalon
@ 2017-05-24 10:06 ` Declan Doherty
0 siblings, 0 replies; 21+ messages in thread
From: Declan Doherty @ 2017-05-24 10:06 UTC (permalink / raw)
To: Thomas Monjalon, Radu Nicolau
Cc: dev, olivier.matz, jerin.jacob, Boris Pismenny
On 16/05/2017 10:46 PM, Thomas Monjalon wrote:
> 09/05/2017 16:57, Radu Nicolau:
>> In this RFC we introduce a mechanism to support inline hardware
>> acceleration of symmetric crypto processing of IPsec flows
>> on Ethernet adapters within the cryptodev framework,
>> specifically this RFC includes the initial enablement work
>> for the Intel® 82599 10 GbE Controller (IXGBE).
>
> We must stop after this first introduction and think about what
> inline crypto processing is.
>
> At the beginning are two types of processing:
> - networking Rx/Tx
> - crypto
> Then we want to combine them in the same device.
> We could also try to combine more processing:
> - compression
> - pattern matching
> - etc
> We will also probably have in future some devices able to combine
> processing or do them separately (inline crypto or simple crypto).
>
> Is there a good way to specify these combinations?
> I'm dreaming of a pipeline model with a JIT compiler...
>
Indeed, flexible pipeline devices are going to be an interesting challenge
to support within DPDK.
I think inline offloading of symmetric crypto processing is an
interesting offload: it doesn't really affect the logical pipeline from
an application point of view, it just delays a single element (crypto)
of a pipeline stage (IPsec in this case) for performance, as it is
always processed in the context of a higher level protocol such as
IPsec, SSL, DTLS etc. This is one of the main reasons I proposed the
cryptodev model instead of an rte_flow type model, as the processing being
done inline is exactly the same as that provided by a cryptodev PMD, be it
executed on the host or on a lookaside accelerator, and a full IPsec
protocol stack is still required. Also, using rte_flow would essentially
be creating a second crypto control plane API in DPDK for programming
the same functionality.
In the future, when inline accelerators can offload all of the
processing for a particular protocol, including encap/decap, then
rte_flow is a much more appropriate approach.
> Here we are adding one more layer to the combination of Rx/Tx + crypto:
> it is a specific API for IPsec.
>
> One more thing in this landscape:
> How the eventdev model propose to combine such processing?
>
> [...]
>> 3. The definition of new tx/rx mbuf offload flags to indicate that a packet requires inline crypto processing on to the NIC PMD on transmit and to indicate that a packet has been processed by the inline crypto hardware on ingress.
>>
>> /**
>> * Inline IPSec Rx processed packet
>> */
>> #define PKT_RX_IPSEC_INLINE_CRYPTO (1ULL << 17)
>>
>> /**
>> * Inline IPSec Rx packet authentication failed
>> */
>> #define PKT_RX_IPSEC_INLINE_CRYPTO_AUTH_FAILED (1ULL << 18)
>>
>> /**
>> * Inline IPSec Tx process packet
>> */
>> #define PKT_TX_IPSEC_INLINE_CRYPTO (1ULL << 43)
>
> We won't be able to add an offload flag for every protocols.
> Can we define a more generic flag for Rx crypto failure?
> The type of Rx crypto can be defined as a packet type.
> IPsec is exactly the same thing as VLAN to this regard.
> Olivier, what do you plan for VLAN flags and packet types?
>
How about:
#define PKT_RX_INLINE_CRYPTO (1ULL << 17)
#define PKT_RX_INLINE_CRYPTO_PROCESSING_FAILED (1ULL << 18)
#define PKT_TX_INLINE_CRYPTO (1ULL << 43)
that way it's protocol independent and these flags could be used with
any accelerator providing inline crypto functionality for any protocol.
> Where is the item 4? :)
Lost in space :) just a typo.
>
>> 5. The addition of inline crypto metadata into the rte_mbuf structure to allow the required egress metadata to be given to the NIC PMD to build the necessary transmit descriptors in tx_burst processing when the PKT_TX_IPSEC_INLINE_CRYPTO is set. We are looking for feedback on a better approach to handling the passing of this metadata to the NIC as it is understood that different hardware accelerators which support this offload may have different requirements for metadata depending on implementation and other capabilities in the device. One possibility we have consider is that the last 16 bytes of mbuf is reserved for device specific metadata, which layout is flexible depending on the hardware being used.
>>
>> struct rte_mbuf {
>> ...
>> /** Inline IPSec metadata*/
>> struct {
>> uint16_t sa_idx; /**< SA index */
>> uint8_t pad_len; /**< Padding length */
>> uint8_t enc;
>> } inline_ipsec;
>> } __rte_cache_aligned;
>
> I really think we should stop adding such things in the mbuf.
> It is convenient for performance, but have we looked at other options?
>
I've considered making the use of private data a requirement for using
this offload, but it seemed a bit restrictive and onerous to application
developers; if no other alternative is available then perhaps this
would be the best path forward.
Another approach I had considered was managing private cookies on a
per-packet basis, but the performance impact of requiring independent
freeing of the cookie for each packet meant I ruled that out.
In general I think a generic security flow identification field
(uint32_t security_flow_id) in the mbuf may work better as an
alternative to the inline ipsec struct. This would allow for a generic
security flow identifier which could be used on both egress and ingress.
This would also be applicable to more than just inline crypto IPsec
offloads. It could also be used for a full inline IPsec offload and also
for offloading of other protocols such as SSL/DTLS in the future. In our
case we could also use this to fulfill our requirement to pass metadata
with each packet; the security flow id could be used to reference this
metadata internally within the PMD. I thought it might be possible to
extend the hash field for this purpose also, but I think it would make
sense to have an independent flow id for security protocols as RSS/flow
director functionality may be performed on the inner decrypted payloads
and needs to be reported separately.
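To make that alternative concrete, a rough sketch of how a generic flow id
could replace the device-specific struct. The mbuf field does not exist yet,
so the id is passed around explicitly here; the PMD-side table and names are
hypothetical.

#include <stdint.h>

/* Device-specific metadata stays inside the PMD, keyed by the generic
 * security flow id carried with the packet. */
struct pmd_sec_flow_md {
	uint16_t sa_idx;	/* e.g. the IXGBE key-table slot */
	uint8_t  pad_len;
	uint8_t  enc;
};

static struct pmd_sec_flow_md sec_flow_table[1024];

/* NIC PMD tx-path sketch: resolve the generic id into whatever this
 * particular device needs to build its transmit descriptor. */
static inline const struct pmd_sec_flow_md *
lookup_sec_flow(uint32_t security_flow_id)
{
	return &sec_flow_table[security_flow_id & 1023];
}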
> We cannot reserve a metadata block and share it with other layers, because
> it would prevent us from combining offloads of different layers.
> And we won't have enough space for every layers.
>
> There was the same discussion when introducing cryptodev. And the
> conclusion was to not directly link any crypto metadata to the mbuf.
>
^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, other threads:[~2017-05-24 10:06 UTC | newest]
Thread overview: 21+ messages
-- links below jump to the message on this page --
2017-05-09 14:57 [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 1/5] cryptodev: Updated API to add suport for inline IPSec Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 2/5] pci: allow shared device instances Radu Nicolau
2017-05-10 9:09 ` Thomas Monjalon
2017-05-10 10:11 ` Radu Nicolau
2017-05-10 10:28 ` Thomas Monjalon
2017-05-10 10:47 ` Radu Nicolau
2017-05-10 10:52 ` Declan Doherty
2017-05-10 11:08 ` Jerin Jacob
2017-05-10 11:31 ` Declan Doherty
2017-05-10 12:18 ` Jerin Jacob
2017-05-10 11:37 ` Thomas Monjalon
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 3/5] mbuff: added inline IPSec flags and metadata Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 4/5] cryptodev: added new crypto PMD supporting inline IPSec for IXGBE Radu Nicolau
2017-05-09 14:57 ` [dpdk-dev] [RFC][PATCH 5/5] examples: updated IPSec sample app to support inline IPSec Radu Nicolau
2017-05-10 16:07 ` [dpdk-dev] [RFC][PATCH 0/5] cryptodev: Adding support for inline crypto processing of IPsec flows Boris Pismenny
2017-05-10 17:21 ` Declan Doherty
2017-05-11 5:27 ` Boris Pismenny
2017-05-11 9:05 ` Radu Nicolau
2017-05-16 21:46 ` Thomas Monjalon
2017-05-24 10:06 ` Declan Doherty