* [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries
@ 2021-07-13 13:32 Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 01/10] security: add support for TSO on IPsec session Radu Nicolau
` (9 more replies)
0 siblings, 10 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for:
- TSO
- NAT-T/UDP encapsulation
- ESN
- AES_CCM, CHACHA20_POLY1305 and AES_GMAC
- SA telemetry
- mbuf offload flags
- initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add support for TSO on IPsec session
security: add UDP params for IPsec NAT-T
security: add ESN field to ipsec_xform
mbuf: add IPsec ESP tunnel type
ipsec: add support for AEAD algorithms
ipsec: add transmit segmentation offload support
ipsec: add support for NAT-T
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
ipsec: add ol_flags support
lib/ipsec/crypto.h | 137 ++++++++++++
lib/ipsec/esp_inb.c | 88 +++++++-
lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
lib/ipsec/iph.h | 23 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 +
lib/ipsec/rte_ipsec_sa.h | 11 +-
lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 43 ++++
lib/ipsec/version.map | 8 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 31 +++
12 files changed, 950 insertions(+), 73 deletions(-)
--
2.25.1
* [dpdk-dev] [RFC 01/10] security: add support for TSO on IPsec session
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (8 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, Radu Nicolau, Abhijit Sinha, Daniel Martin Buckley
Allow the user to provision a per security session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when the PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are set in the mbuf.
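For illustration, a minimal usage sketch (the helper name and MSS value
are hypothetical, and the surrounding session setup is omitted):

    #include <rte_security.h>
    #include <rte_mbuf.h>

    static void
    session_enable_tso(struct rte_security_ipsec_xform *ipsec_xform,
            struct rte_mbuf *mb)
    {
        /* opt this SA in to TSO and set the payload MSS */
        ipsec_xform->options.tso = 1;
        ipsec_xform->mss = 1400; /* example value, below the path MTU */

        /* per packet: request segmentation via the TX offload flags */
        mb->ol_flags |= PKT_TX_TCP_SEG;
    }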
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..45896a77d0 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable per session security TSO support; use the MSS value
+ * provided in the IPsec security session when the PKT_TX_TCP_SEG
+ * or PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported by
+ * the driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and packet transmission will fail
+ * if it is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
* [dpdk-dev] [RFC 02/10] security: add UDP params for IPsec NAT-T
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (7 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, Radu Nicolau, Abhijit Sinha, Daniel Martin Buckley
Add support for specifying UDP port parameters for the UDP encapsulation option.
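As a hedged sketch of the intended use (the helper is hypothetical;
port 4500 is the IANA-assigned IPsec NAT-T port, per RFC 3948):

    #include <rte_security.h>

    static void
    session_enable_natt(struct rte_security_ipsec_xform *ipsec_xform)
    {
        /* enable ESP-in-UDP encapsulation and set the UDP ports */
        ipsec_xform->options.udp_encap = 1;
        ipsec_xform->udp.sport = 4500;
        ipsec_xform->udp.dport = 4500;
    }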
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 45896a77d0..03572b10ab 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
* [dpdk-dev] [RFC 03/10] security: add ESN field to ipsec_xform
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (6 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, Radu Nicolau, Abhijit Sinha, Daniel Martin Buckley
Update the ipsec_xform definition to include an ESN field.
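A short sketch of how the new field could be used (hypothetical helper,
illustrative values):

    #include <rte_security.h>

    static void
    session_set_esn(struct rte_security_ipsec_xform *ipsec_xform)
    {
        /* start the SA from a known 64-bit ESN instead of zero */
        ipsec_xform->options.esn = 1;
        ipsec_xform->esn.value = ((uint64_t)1 << 32) | 100;
        /* equivalently: ipsec_xform->esn.hi = 1; ...esn.low = 100; */
    }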
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 03572b10ab..702de58b48 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
* [dpdk-dev] [RFC 04/10] mbuf: add IPsec ESP tunnel type
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (5 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add a tunnel type for IPsec ESP tunnels.
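A hedged sketch of the expected usage, mirroring the other
PKT_TX_TUNNEL_* flags (the outer header sizes are example assumptions):

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>

    static void
    mark_esp_tunnel(struct rte_mbuf *mb)
    {
        /* advertise the outer encapsulation for TSO/checksum offload */
        mb->ol_flags |= PKT_TX_TUNNEL_ESP;
        mb->outer_l2_len = sizeof(struct rte_ether_hdr);
        mb->outer_l3_len = sizeof(struct rte_ipv4_hdr);
    }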
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
* [dpdk-dev] [RFC 05/10] ipsec: add support for AEAD algorithms
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (4 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
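For context, a hedged sketch of an AEAD crypto xform such an SA could be
created with (key data omitted; sizes follow RFC 7634 conventions but
are illustrative and device-dependent):

    #include <rte_cryptodev.h>

    static struct rte_crypto_sym_xform chacha_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .aead = {
            .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
            .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
            .key = { .length = 36 }, /* 256-bit key + 32-bit salt */
            .iv = { .offset = 0, .length = 12 }, /* offset is app-specific */
            .digest_length = 16,
        },
    };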
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..598ee9cebd 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+};
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only AEAD algorithms: AES-GCM, AES-CCM, CHACHA20-POLY1305.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only AEAD algorithms: AES-GCM, AES-CCM, CHACHA20-POLY1305.
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & RFC 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
* [dpdk-dev] [RFC 06/10] ipsec: add transmit segmentation offload support
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 07/10] ipsec: add support for NAT-T Radu Nicolau
` (3 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
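As a worked example of the sequence number accounting introduced below,
the per-packet segment count reduces to an integer ceiling (names are
illustrative; this is equivalent to the ceil() used in the patch):

    /* number of ESP sequence numbers a TSO packet will consume */
    static inline uint16_t
    tso_nb_segments(uint32_t pkt_l3len, uint16_t mss)
    {
        if (pkt_l3len <= mss)
            return 1;
        return (pkt_l3len + mss - 1) / mss; /* integer ceiling */
    }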
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..e550d320da 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append the ICV when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append the ICV when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m)
+{
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = ceil((float)pkt_l3len / sa->tso.mss);
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, increment sqn by the number of
+ * segments for the packet.
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, increment sqn by the number of
+ * segments for the packet.
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ } else
+ v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
* [dpdk-dev] [RFC 07/10] ipsec: add support for NAT-T
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (2 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
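A hedged sketch of what this implies for the application: in tunnel mode
the outer header template passed via rte_ipsec_sa_prm is assumed to
already contain the UDP header, which the library then fixes up per
packet (prm is a struct rte_ipsec_sa_prm being prepared for
rte_ipsec_sa_init(); addresses are omitted and the ports are
illustrative):

    #include <string.h>
    #include <rte_ip.h>
    #include <rte_udp.h>

    /* outer header template: IPv4 + UDP, filled by the application */
    struct {
        struct rte_ipv4_hdr v4;
        struct rte_udp_hdr udp;
    } __rte_packed outh;

    memset(&outh, 0, sizeof(outh));
    outh.v4.version_ihl = 0x45;            /* IPv4, 20-byte header */
    outh.v4.time_to_live = 64;
    outh.v4.next_proto_id = IPPROTO_UDP;   /* ESP-in-UDP */
    outh.udp.src_port = rte_cpu_to_be_16(4500);
    outh.udp.dst_port = rte_cpu_to_be_16(4500);

    prm.tun.hdr = &outh;
    prm.tun.hdr_len = sizeof(outh);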
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/iph.h | 13 +++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..093f86d34a 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,23 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
* [dpdk-dev] [RFC 08/10] ipsec: add support for SA telemetry
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 09/10] ipsec: add support for initial SQN value Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 10/10] ipsec: add ol_flags support Radu Nicolau
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add telemetry support for IPsec SAs.
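For reference, a short usage sketch of the new API (sa is an
already-initialized SA; the query at the end uses the standard
dpdk-telemetry.py client and the SPI value is a placeholder):

    #include <rte_ipsec.h>

    /* once at startup */
    rte_ipsec_telemetry_init();

    /* for each SA that should be visible to telemetry */
    rte_ipsec_telemetry_sa_add(sa);

Then, from the telemetry socket:

    $ ./usertools/dpdk-telemetry.py
    --> /ipsec/sa/stats,1234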
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 ++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 8 ++
7 files changed, 304 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index e550d320da..dc92dd7aab 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..d34798bc7f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,17 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..bbbf673d8b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, htonl(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = htonl((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = htonl((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec Security Assoication stastistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Assoication configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ad3e38b7c8..c181c1fb04 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,11 @@ DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
* [dpdk-dev] [RFC 09/10] ipsec: add support for initial SQN value
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 10/10] ipsec: add ol_flags support Radu Nicolau
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Update the IPsec library to support setting an initial SQN value.
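A hedged sketch of the intended flow (sa and size come from the usual
rte_ipsec_sa_size()/allocation steps, omitted here; the starting value
is illustrative):

    #include <string.h>
    #include <rte_ipsec.h>

    struct rte_ipsec_sa_prm prm;

    memset(&prm, 0, sizeof(prm));
    /* ... fill crypto/ipsec xforms as usual ... */

    /* start sequence numbering from a non-zero value instead of 1 */
    prm.ipsec_xform.esn.value = 1000;

    rte_ipsec_sa_init(sa, &prm, size);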
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index dc92dd7aab..07eec9a905 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index bbbf673d8b..dd18b44c38 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero SQN value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [dpdk-dev] [RFC 10/10] ipsec: add ol_flags support
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-07-13 13:32 ` [dpdk-dev] [RFC 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-07-13 13:32 ` Radu Nicolau
9 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Set mbuf->ol_flags for IPsec packets.
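The intended usage, sketched below under the assumption that the application configures the matching ethdev Tx offloads (port setup is not part of this patch), is that the library sets the per-packet flags and the PMD completes the checksums:
#include <string.h>
#include <rte_ethdev.h>
/* sketch only: enable the Tx offloads that honour the ol_flags set by
 * the library (PKT_TX_IP_CKSUM, PKT_TX_OUTER_IP_CKSUM, inline security) */
static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
		DEV_TX_OFFLOAD_SECURITY;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}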
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
/* reset mbuf metadata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 07eec9a905..acbb86845e 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index dd18b44c38..67264c0d05 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update outer l2_len and l3_len fields for inline crypto outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [dpdk-dev] [RFC 06/10] ipsec: add transmit segmentation offload support
2021-07-06 11:28 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-07-06 11:29 ` Radu Nicolau
0 siblings, 0 replies; 12+ messages in thread
From: Radu Nicolau @ 2021-07-06 11:29 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add transmit segmentation offload (TSO) support to the inline crypto processing
mode. Other processing modes do not support this offload, since it requires,
at a minimum, inline crypto support for IPsec on the
network interface.
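For illustration, a minimal sketch of enabling TSO on an SA, assuming the options.tso and mss fields added in RFC 01/10; with an MSS of 1400, a packet carrying 4000 bytes above L2 consumes three sequence numbers:
#include <rte_ipsec.h>
/* sketch only: request segmentation for this SA; oversized TCP/UDP
 * packets are then marked with PKT_TX_TCP_SEG/PKT_TX_UDP_SEG by the
 * outbound processing below */
static void
enable_sa_tso(struct rte_ipsec_sa_prm *prm, uint16_t mss)
{
	prm->ipsec_xform.options.tso = 1; /* per-SA TSO enable */
	prm->ipsec_xform.mss = mss;       /* e.g. 1400 */
}
Note that the mbufs must carry a valid packet_type (RTE_PTYPE_L4_TCP/UDP) and l2_len for the segment counting in esn_outb_nb_segments() below to take effect.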
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..e550d320da 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append the ICV when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append the ICV when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m)
+{
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = ceil((float)pkt_l3len / sa->tso.mss);
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, advance sqn by the number of
+ * segments it will be split into
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, advance sqn by the number of
+ * segments it will be split into
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso)
+ v4h->hdr_checksum = 0;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads: [~2021-07-13 13:45 UTC | newest]
Thread overview: 12+ messages
2021-07-13 13:32 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 03/10] security: add ESN field to ipsec_xform Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 07/10] ipsec: add support for NAT-T Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 08/10] ipsec: add support for SA telemetry Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 09/10] ipsec: add support for initial SQN value Radu Nicolau
2021-07-13 13:32 ` [dpdk-dev] [RFC 10/10] ipsec: add ol_flags support Radu Nicolau
-- strict thread matches above, loose matches on Subject: below --
2021-07-06 11:28 [dpdk-dev] [RFC 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-06 11:29 ` [dpdk-dev] [RFC 06/10] ipsec: add transmit segmentation offload support Radu Nicolau