* [dpdk-dev] [PATCH 01/10] security: add support for TSO on IPsec session
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-27 18:34 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-07-13 13:35 ` [dpdk-dev] [PATCH 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (17 subsequent siblings)
18 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, Radu Nicolau, Abhijit Sinha, Daniel Martin Buckley
Allow the user to provision a per-security-session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when the PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are specified in the mbuf.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..45896a77d0 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable TSO support for this session; use the MSS value
+ * provided in the IPsec security session when the PKT_TX_TCP_SEG
+ * or PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported
+ * by the driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and transmission will fail if the
+ * packet is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH 01/10] security: add support for TSO on IPsec session
2021-07-13 13:35 ` [dpdk-dev] [PATCH 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-07-27 18:34 ` Akhil Goyal
2021-07-29 8:37 ` Nicolau, Radu
2021-07-31 17:50 ` Akhil Goyal
0 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-07-27 18:34 UTC (permalink / raw)
To: Radu Nicolau, Tejasree Kondoj, Declan Doherty
Cc: Anoob Joseph, dev, Abhijit Sinha, Daniel Martin Buckley, Ankur Dwivedi
> Allow user to provision a per security session maximum segment size
> (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
> The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
> ol_flags are specified in mbuf.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
Can we have a deprecation notice for the changes introduced in this series?
Also, there are 2 other features which modify the same struct. Can we have a
single deprecation notice for all the changes in rte_security_ipsec_sa_options?
The notice can be something like:
+* security: The IPsec SA config options structure ``struct rte_security_ipsec_sa_options``
+ will be updated to support more features.
And we may have reserved bit fields for the rest of the vacant bits so that the ABI
is not broken when a new bit field is added.
http://patches.dpdk.org/project/dpdk/patch/20210630112049.3747-1-marchana@marvell.com/
http://patches.dpdk.org/project/dpdk/patch/20210705131335.21070-1-ktejasree@marvell.com/
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH 01/10] security: add support for TSO on IPsec session
2021-07-27 18:34 ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-07-29 8:37 ` Nicolau, Radu
2021-07-31 17:50 ` Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-07-29 8:37 UTC (permalink / raw)
To: Akhil Goyal, Tejasree Kondoj, Declan Doherty
Cc: Anoob Joseph, dev, Abhijit Sinha, Daniel Martin Buckley, Ankur Dwivedi
Hi, thanks for reviewing. I'm OOO at the moment; I will send an updated
patchset next week.
On 7/27/2021 9:34 PM, Akhil Goyal wrote:
>> Allow user to provision a per security session maximum segment size
>> (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
>> The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
>> ol_flags are specified in mbuf.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> ---
> Can we have deprecation notice for the changes introduced in this series.
>
> Also there are 2 other features which modify same struct. Can we have a
> Single deprecation notice for all the changes in the rte_security_ipsec_sa_options?
> The notice can be something like:
> +* security: The IPsec SA config options structure ``struct rte_security_ipsec_sa_options``
> + will be updated to support more features.
> And we may have a reserved bit fields for rest of the vacant bits so that ABI is not broken
> When a new bit field is added.
>
> http://patches.dpdk.org/project/dpdk/patch/20210630112049.3747-1-marchana@marvell.com/
> http://patches.dpdk.org/project/dpdk/patch/20210705131335.21070-1-ktejasree@marvell.com/
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH 01/10] security: add support for TSO on IPsec session
2021-07-27 18:34 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-07-29 8:37 ` Nicolau, Radu
@ 2021-07-31 17:50 ` Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-07-31 17:50 UTC (permalink / raw)
To: Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Cc: Anoob Joseph, dev, Ankur Dwivedi, Tejasree Kondoj
> > Allow user to provision a per security session maximum segment size
> > (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
> > The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
> > ol_flags are specified in mbuf.
> >
> > Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> > Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> > ---
> Can we have deprecation notice for the changes introduced in this series.
>
> Also there are 2 other features which modify same struct. Can we have a
> Single deprecation notice for all the changes in the
> rte_security_ipsec_sa_options?
> The notice can be something like:
> +* security: The IPsec SA config options structure ``struct
> rte_security_ipsec_sa_options``
> + will be updated to support more features.
> And we may have a reserved bit fields for rest of the vacant bits so that ABI is
> not broken
> When a new bit field is added.
>
> http://patches.dpdk.org/project/dpdk/patch/20210630112049.3747-1-
> marchana@marvell.com/
> http://patches.dpdk.org/project/dpdk/patch/20210705131335.21070-1-
> ktejasree@marvell.com/
I have sent the consolidated deprecation notice for all three features.
Can you guys Ack it?
https://mails.dpdk.org/archives/dev/2021-July/215906.html
Also, please send a deprecation notice for the changes in the ipsec xform as well.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH 02/10] security: add UDP params for IPsec NAT-T
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (16 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, Radu Nicolau, Abhijit Sinha, Daniel Martin Buckley
Add support for specifying UDP port params for the UDP encapsulation option.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 45896a77d0..03572b10ab 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when the udp_encap option is not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH 03/10] security: add ESN field to ipsec_xform
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (15 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, Radu Nicolau, Abhijit Sinha, Daniel Martin Buckley
Update the ipsec_xform definition to include an ESN field.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 03572b10ab..702de58b48 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH 04/10] mbuf: add IPsec ESP tunnel type
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (14 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add a tunnel type for IPsec ESP tunnels.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH 05/10] ipsec: add support for AEAD algorithms
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (13 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..598ee9cebd 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally that should be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that should be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+};
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & RFC 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH 06/10] ipsec: add transmit segmentation offload support
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 07/10] ipsec: add support for NAT-T Radu Nicolau
` (12 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..9fc7075796 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = ceil((float)pkt_l3len / sa->tso.mss);
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ }
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
* [dpdk-dev] [PATCH 07/10] ipsec: add support for NAT-T
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (11 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
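Per RFC 768 the UDP length field counts the 8-byte UDP header plus its payload, so for NAT-T encapsulation the datagram length is everything after the outer IP header. A standalone sketch of that arithmetic (header size hardcoded here for illustration; the patch derives it from sizeof the header struct):

```c
#include <assert.h>
#include <stdint.h>

#define OUTER_IPV4_HDR_LEN 20u  /* outer IPv4 header, no options */

/* UDP dgram_len for a NAT-T packet: total frame length minus the L2
 * header and the outer IPv4 header. The UDP header itself is included
 * in the UDP length field per RFC 768. */
uint16_t
natt_udp_dgram_len(uint16_t plen, uint16_t l2len)
{
	return (uint16_t)(plen - l2len - OUTER_IPV4_HDR_LEN);
}
```

Note that if sizeof the encapsulating UDP header is also subtracted, as in the hunk below, the length field undercounts by those 8 bytes, since the UDP length covers its own header.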
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/iph.h | 13 +++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..093f86d34a 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,23 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
* [dpdk-dev] [PATCH 08/10] ipsec: add support for SA telemetry
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 09/10] ipsec: add support for initial SQN value Radu Nicolau
` (10 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Add telemetry support for IPsec SAs.
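The counters added here track packets and bytes per SA, with the tunnel header excluded from the byte count once per counted packet; a minimal standalone sketch of that accounting (struct and field names assumed, mirroring the patch):

```c
#include <assert.h>
#include <stdint.h>

struct sa_stats {
	uint64_t count;
	uint64_t bytes;
};

/* After processing a burst, k packets succeeded and their summed
 * data_len is 'bytes'; hdr_len bytes of tunnel header are excluded
 * per counted packet, as in esp_outb_sqh_process(). */
void
sa_stats_update(struct sa_stats *st, uint32_t k, uint64_t bytes,
	uint32_t hdr_len)
{
	st->count += k;
	st->bytes += bytes - (uint64_t)hdr_len * k;
}
```

For example, two good 1500-byte packets behind a 20-byte tunnel header add 2 to the packet counter and 2960 to the byte counter.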
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 ++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 8 ++
7 files changed, 304 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9fc7075796..2c02c3bb12 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..d34798bc7f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,17 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..5b55bbc098 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, htonl(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = htonl((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = htonl((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec Security Association statistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Association configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ad3e38b7c8..c181c1fb04 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,11 @@ DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
* [dpdk-dev] [PATCH 09/10] ipsec: add support for initial SQN value
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-07-13 13:35 ` [dpdk-dev] [PATCH 10/10] ipsec: add ol_flags support Radu Nicolau
` (9 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Update IPsec library to support initial SQN value.
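The sequence-number width and the new starting value interact as sketched below: ESN disabled gives a 32-bit sequence space, ESN enabled a 64-bit one, and with this patch the outbound counter starts from the user-supplied esn.value rather than a fixed 1 (standalone sketch with assumed field names):

```c
#include <assert.h>
#include <stdint.h>

struct sqn_state {
	uint64_t mask; /* UINT32_MAX without ESN, UINT64_MAX with ESN */
	uint64_t outb; /* next outbound sequence number */
};

/* Initialize outbound SQN state from the xform, as rte_ipsec_sa_init()
 * now does: the mask depends on the ESN option and the counter starts
 * at the caller-provided initial value. */
void
sqn_state_init(struct sqn_state *s, int esn_enabled, uint64_t initial)
{
	s->mask = esn_enabled ? UINT64_MAX : UINT32_MAX;
	s->outb = initial;
}
```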
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 2c02c3bb12..8a6d09558f 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 5b55bbc098..242fdcd461 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH 10/10] ipsec: add ol_flags support
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-07-13 13:35 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 subsequent siblings)
18 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-07-13 13:35 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
Set mbuf->ol_flags for IPsec packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
/* reset mbuf metadata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 8a6d09558f..d8e261e6fb 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 242fdcd461..51f71b30c6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update outer l2_len and l3_len fields for inline crypto outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
* [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (9 preceding siblings ...)
2021-07-13 13:35 ` [dpdk-dev] [PATCH 10/10] ipsec: add ol_flags support Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 01/10] security: add support for TSO on IPsec session Radu Nicolau
` (9 more replies)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 subsequent siblings)
18 siblings, 10 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Radu Nicolau (10):
security: add support for TSO on IPsec session
security: add UDP params for IPsec NAT-T
security: add ESN field to ipsec_xform
mbuf: add IPsec ESP tunnel type
ipsec: add support for AEAD algorithms
ipsec: add transmit segmentation offload support
ipsec: add support for NAT-T
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
ipsec: add ol_flags support
lib/ipsec/crypto.h | 137 ++++++++++++
lib/ipsec/esp_inb.c | 88 +++++++-
lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
lib/ipsec/iph.h | 23 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 +
lib/ipsec/rte_ipsec_sa.h | 11 +-
lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 43 ++++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 31 +++
12 files changed, 951 insertions(+), 73 deletions(-)
--
2.25.1
* [dpdk-dev] [PATCH v2 01/10] security: add support for TSO on IPsec session
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (8 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Allow user to provision a per security session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are specified in mbuf.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..45896a77d0 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable per session security TSO support, use the MSS value
+ * provided in the IPsec security session when PKT_TX_TCP_SEG or
+ * PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported by
+ * the driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and packet transmission will fail
+ * if it is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
* [dpdk-dev] [PATCH v2 02/10] security: add UDP params for IPsec NAT-T
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (7 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Add support for specifying UDP port params for UDP encapsulation option.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 45896a77d0..03572b10ab 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
* [dpdk-dev] [PATCH v2 03/10] security: add ESN field to ipsec_xform
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (6 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Update ipsec_xform definition to include ESN field.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 03572b10ab..702de58b48 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
* [dpdk-dev] [PATCH v2 04/10] mbuf: add IPsec ESP tunnel type
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (5 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Add tunnel type for IPsec ESP tunnels
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
* [dpdk-dev] [PATCH v2 05/10] ipsec: add support for AEAD algorithms
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (4 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..598ee9cebd 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 7634 (ChaCha20-Poly1305 for ESP), following RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally this would be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+};
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634 AAD Construction (by analogy with RFC 4106, section 5)
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
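The AAD-fill changes in the patch above replace a single AES-GCM path with a switch keyed on `sa->algo_type`. A compilable sketch of that dispatch idea, outside the diff (the simplified enum and `aead_aad` layout here are illustrative stand-ins, not the lib/ipsec definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* simplified stand-ins for the lib/ipsec algorithm types */
enum sa_algo_type {
	ALGO_TYPE_NULL = 0,
	ALGO_TYPE_AES_GCM,
	ALGO_TYPE_AES_CCM,
	ALGO_TYPE_CHACHA20_POLY1305,
};

/* hypothetical ESP AEAD AAD layout: SPI plus 64-bit (extended) SQN */
struct aead_aad {
	uint32_t spi;
	uint64_t sqn;
} __attribute__((packed));

/* AAD size an SA of the given algorithm reserves after the ICV;
 * 0 for algorithms that carry no AAD */
static size_t
aad_len_for_algo(enum sa_algo_type algo)
{
	switch (algo) {
	case ALGO_TYPE_AES_GCM:
	case ALGO_TYPE_AES_CCM:
	case ALGO_TYPE_CHACHA20_POLY1305:
		return sizeof(struct aead_aad);
	default:
		return 0;
	}
}
```

In the patch, each case casts `icv->va + sa->icv_len` to the algorithm-specific AAD struct and fills it with SPI and sequence number; only the struct type differs per case, so the dispatch shape is the same.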
* [dpdk-dev] [PATCH v2 06/10] ipsec: add transmit segmentation offload support
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 07/10] ipsec: add support for NAT-T Radu Nicolau
` (3 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
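The MSS bookkeeping this patch introduces boils down to a ceiling division of the post-L2 payload length by the per-session MSS. A minimal sketch in plain C (`tso_nb_segments` is an illustrative helper; the real code also sets the mbuf's `ol_flags`, `l4_len` and `tso_segsz`):

```c
#include <assert.h>
#include <stdint.h>

/* number of TSO segments for a packet whose length above L2 is pkt_l3len,
 * given the session MSS; 1 when no segmentation is needed */
static uint16_t
tso_nb_segments(uint32_t pkt_l3len, uint16_t mss)
{
	if (mss == 0 || pkt_l3len <= mss)
		return 1;
	/* integer ceiling division, no float ceil() round trip needed */
	return (pkt_l3len + mss - 1) / mss;
}
```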
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..9fc7075796 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* The ICV is not appended when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* The ICV is not appended when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m)
+{
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = (pkt_l3len + sa->tso.mss - 1) / sa->tso.mss;
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, its extra segments consume
+ * additional sequence numbers
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, its extra segments consume
+ * additional sequence numbers
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ } else
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
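Because every TSO segment the NIC emits carries its own ESP sequence number, the process loops in the patch above sum `nb_segs[]` before reserving SQNs, then skip `nb_segs[i] - 1` numbers after each TSO packet. The reservation arithmetic in isolation (`burst_sqn_count` is an illustrative helper, not the library API):

```c
#include <assert.h>
#include <stdint.h>

/* total ESP sequence numbers a burst will consume, given the per-packet
 * TSO segment counts (1 for packets that are not segmented) */
static uint32_t
burst_sqn_count(const uint16_t nb_segs[], uint32_t num)
{
	uint32_t i, nb_sqn = 0;

	for (i = 0; i != num; i++)
		nb_sqn += nb_segs[i];

	return nb_sqn;
}
```

If the SQN window cannot supply the full count, `esn_outb_update_sqn()` trims the allocation and the code flags `EOVERFLOW`, exactly as in the non-TSO path.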
* [dpdk-dev] [PATCH v2 07/10] ipsec: add support for NAT-T
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (2 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
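The NAT-T flag this patch adds is packed into the SA type bit-field directly after the two MODE bits, shifting SQN and the later flags up by one position. A compilable sketch of that layout and the flag test (simplified names; the positions follow the enum in the diff below):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified mirror of the SA-type bit layout from rte_ipsec_sa.h after
 * this patch: PROTO | DIR | MODE (two bits) | NATT | SQN | ...
 */
enum {
	SATP_LOG2_PROTO = 0,
	SATP_LOG2_DIR,
	SATP_LOG2_MODE,
	SATP_LOG2_NATT = SATP_LOG2_MODE + 2,	/* MODE occupies two bits */
	SATP_LOG2_SQN,
};

#define SATP_NATT_MASK		(1ULL << SATP_LOG2_NATT)
#define SATP_NATT_ENABLE	(1ULL << SATP_LOG2_NATT)

/* check whether an SA type word has UDP encapsulation (NAT-T) enabled */
static int
satp_is_natt(uint64_t type)
{
	return (type & SATP_NATT_MASK) == SATP_NATT_ENABLE;
}
```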
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/iph.h | 13 +++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..093f86d34a 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,23 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
* [dpdk-dev] [PATCH v2 08/10] ipsec: add support for SA telemetry
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 09/10] ipsec: add support for initial SQN value Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 10/10] ipsec: add ol_flags support Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
To: Ray Kinsella, Neil Horman
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
hemant.agrawal, gakhil, anoobj, declan.doherty, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau,
Abhijit Sinha
Add telemetry support for IPsec SAs
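The counters this patch adds accumulate per-SA packet and byte totals, subtracting the tunnel header length once per processed packet. A minimal model of that accounting (hypothetical types; assumes single-segment mbufs, matching the `data_len` use in `inline_outb_mbuf_prepare()` below):

```c
#include <assert.h>
#include <stdint.h>

struct sa_statistics {
	uint64_t count;
	uint64_t bytes;
};

/* credit num processed packets of the given data lengths, counting
 * payload bytes net of the hdr_len-byte tunnel header per packet */
static void
sa_stats_update(struct sa_statistics *st, const uint32_t data_len[],
	uint32_t num, uint32_t hdr_len)
{
	uint64_t bytes = 0;
	uint32_t i;

	for (i = 0; i != num; i++)
		bytes += data_len[i];

	st->count += num;
	st->bytes += bytes - (uint64_t)hdr_len * num;
}
```

The telemetry handlers then simply read `statistics.count`, `statistics.bytes` and the error counter into a `rte_tel_data` dictionary per SA.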
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 ++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 9 ++
7 files changed, 305 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9fc7075796..2c02c3bb12 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..d34798bc7f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,17 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..5b55bbc098 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, htonl(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = htonl((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = htonl((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB) {
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ } else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec Security Association stastistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Association configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ad3e38b7c8..7ce6ff9ab3 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v2 09/10] ipsec: add support for initial SQN value
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 10/10] ipsec: add ol_flags support Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Update IPsec library to support initial SQN value.
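For illustration, the selection logic that the hunks below open-code in esp_outb_tun_init() and esp_sa_init() can be sketched standalone (the helper name is hypothetical, not part of the patch):

```c
#include <stdint.h>

/* Mirrors the initial-SQN selection open-coded in the patch:
 * a configured esn.value seeds the sequence number, otherwise 0.
 * Note that for an unsigned value the "> 0" test only distinguishes
 * zero from nonzero. */
static inline uint64_t
initial_sqn(uint64_t esn_value)
{
	return esn_value > 0 ? esn_value : 0;
}
```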
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 2c02c3bb12..8a6d09558f 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 5b55bbc098..242fdcd461 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v2 10/10] ipsec: add ol_flags support
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-08-12 13:54 ` Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-12 13:54 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau, Abhijit Sinha
Set mbuf->ol_flags for IPsec packets.
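The per-IP-version flag selection applied throughout this patch can be sketched as follows (the flag values are mock placeholders; the real PKT_TX_* bit positions live in rte_mbuf_core.h):

```c
#include <stdint.h>

/* mock ol_flags bits -- placeholders, not the real PKT_TX_* values */
#define TX_IPV4     (1ULL << 0)
#define TX_IPV6     (1ULL << 1)
#define TX_IP_CKSUM (1ULL << 2)

/* IPv4 headers get IP checksum offload requested alongside the
 * version flag; IPv6 has no header checksum, so only the version
 * flag is set. */
static inline uint64_t
tx_ol_flags_for(int is_ipv4)
{
	return is_ipv4 ? (TX_IPV4 | TX_IP_CKSUM) : TX_IPV6;
}
```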
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijits.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
/* reset mbuf metatdata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 8a6d09558f..d8e261e6fb 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 242fdcd461..51f71b30c6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update l2_len and l3_len fields for inline crypto outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (10 preceding siblings ...)
2021-08-12 13:54 ` [dpdk-dev] [PATCH v2 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 01/10] security: add support for TSO on IPsec session Radu Nicolau
` (10 more replies)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (6 subsequent siblings)
18 siblings, 11 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add support for TSO on IPsec session
security: add UDP params for IPsec NAT-T
security: add ESN field to ipsec_xform
mbuf: add IPsec ESP tunnel type
ipsec: add support for AEAD algorithms
ipsec: add transmit segmentation offload support
ipsec: add support for NAT-T
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
ipsec: add ol_flags support
lib/ipsec/crypto.h | 137 ++++++++++++
lib/ipsec/esp_inb.c | 88 +++++++-
lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
lib/ipsec/iph.h | 23 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 +
lib/ipsec/rte_ipsec_sa.h | 11 +-
lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 43 ++++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 31 +++
12 files changed, 951 insertions(+), 73 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 01/10] security: add support for TSO on IPsec session
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (9 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Allow user to provision a per security session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are specified in mbuf.
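What the option buys can be sketched with a standalone segment-count calculation (mock logic for illustration, not a DPDK API):

```c
#include <stdint.h>

/* With TSO enabled the hardware splits the payload into MSS-sized
 * segments; with it disabled an oversized packet simply fails to
 * transmit, so a single (possibly invalid) segment results. */
static inline uint32_t
tso_nb_segs(uint32_t tso_enabled, uint32_t mss, uint32_t payload_len)
{
	if (!tso_enabled || mss == 0)
		return 1;
	return (payload_len + mss - 1) / mss; /* ceiling division */
}
```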
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..45896a77d0 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable per session security TSO support, use MSS value provided
+ * in IPsec security session when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
+ * ol_flags are set in mbuf, if supported by the driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and packet transmission will fail
+ * if it is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 02/10] security: add UDP params for IPsec NAT-T
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (8 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Add support for specifying UDP port params for UDP encapsulation option.
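For reference, NAT-T (RFC 3948) wraps each ESP packet in an 8-byte UDP header, conventionally on port 4500; a minimal sketch of the encapsulation arithmetic (not part of the patch):

```c
#include <stdint.h>

#define NATT_PORT   4500 /* IANA port for UDP-encapsulated ESP */
#define UDP_HDR_LEN 8

/* UDP datagram length field for an encapsulated ESP packet */
static inline uint16_t
natt_udp_dgram_len(uint16_t esp_len)
{
	return (uint16_t)(UDP_HDR_LEN + esp_len);
}
```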
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 45896a77d0..03572b10ab 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 03/10] security: add ESN field to ipsec_xform
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (7 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Update ipsec_xform definition to include ESN field.
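The union added below overlays the 64-bit value with its 32-bit halves; the equivalent composition can be written out explicitly (standalone sketch; the union overlay itself assumes a little-endian host):

```c
#include <stdint.h>

/* Compose a 64-bit extended sequence number from its halves --
 * the relation the esn union encodes on a little-endian host. */
static inline uint64_t
esn_compose(uint32_t hi, uint32_t low)
{
	return ((uint64_t)hi << 32) | low;
}
```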
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 03572b10ab..702de58b48 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 04/10] mbuf: add IPsec ESP tunnel type
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (6 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add tunnel type for IPsec ESP tunnels
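The tunnel type is a 4-bit field in ol_flags starting at bit 45, so ESP takes the next free code point (0x8) after GTP (0x7); a quick check of that encoding (same bit layout as rte_mbuf_core.h):

```c
#include <stdint.h>

/* 4-bit tunnel-type field at bit 45 of ol_flags */
#define PKT_TX_TUNNEL_GTP  (0x7ULL << 45)
#define PKT_TX_TUNNEL_ESP  (0x8ULL << 45)
#define PKT_TX_TUNNEL_MASK (0xFULL << 45)

static inline int
is_esp_tunnel(uint64_t ol_flags)
{
	return (ol_flags & PKT_TX_TUNNEL_MASK) == PKT_TX_TUNNEL_ESP;
}
```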
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 05/10] ipsec: add support for AEAD algorithms
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-31 10:17 ` Zhang, Roy Fan
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (5 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
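All three algorithms reuse the ESP AAD construction from RFC 4106 section 5: the SPI followed by either a 32-bit sequence number or a 64-bit ESN. The resulting AAD length, before any device-specific padding such as the 18-byte CCM prefix below, works out as:

```c
#include <stdint.h>

/* RFC 4106 section 5: AAD is SPI (4 bytes) followed by either a
 * 32-bit SN (4 bytes) or a 64-bit ESN (8 bytes). */
static inline uint32_t
esp_aad_len(int esn_enabled)
{
	return 4u + (esn_enabled ? 8u : 4u);
}
```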
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..e080422851 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally that should be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 7634 (AAD layout as in RFC 4106, section 5):
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that should be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, AAD construction (same layout as RFC 4106, section 5).
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439*/
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v3 05/10] ipsec: add support for AEAD algorithms
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-08-31 10:17 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-08-31 10:17 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, hemant.agrawal, gakhil, anoobj,
Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M, marchana,
ktejasree, matan, Nicolau, Radu
Hi Radu,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Radu Nicolau
> Sent: Friday, August 13, 2021 10:30 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Richardson, Bruce
> <bruce.richardson@intel.com>; hemant.agrawal@nxp.com;
> gakhil@marvell.com; anoobj@marvell.com; Doherty, Declan
> <declan.doherty@intel.com>; Sinha, Abhijit <abhijit.sinha@intel.com>;
> Buckley, Daniel M <daniel.m.buckley@intel.com>; marchana@marvell.com;
> ktejasree@marvell.com; matan@nvidia.com; Nicolau, Radu
> <radu.nicolau@intel.com>
> Subject: [dpdk-dev] [PATCH v3 05/10] ipsec: add support for AEAD algorithms
>
> Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> +
> +/*
> + * RFC 4106, 5 AAD Construction
> + * spi and sqn should already be converted into network byte order.
[Fan: Comment is incorrect, should be RFC 7634]
> + * Make sure that not used bytes are zeroed.
> + */
> +static inline void
> +aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad
> *aad,
> + rte_be32_t spi, rte_be64_t sqn,
> + int esn)
> +{
> + aad->spi = spi;
> + if (esn)
> + aad->sqn.u64 = sqn;
> + else {
> + aad->sqn.u32[0] = sqn_low32(sqn);
> + aad->sqn.u32[1] = 0;
> + }
> + aad->align0 = 0;
> +}
> +
> /*
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 06/10] ipsec: add transmit segmentation offload support
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 07/10] ipsec: add support for NAT-T Radu Nicolau
` (4 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..9fc7075796 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = ceil((float)pkt_l3len / sa->tso.mss);
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet is using TSO, increment sqn by the number of
+ * segments for the packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet is using TSO, increment sqn by the number of
+ * segments for the packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ }
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 07/10] ipsec: add support for NAT-T
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (3 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/iph.h | 13 +++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..093f86d34a 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,23 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 08/10] ipsec: add support for SA telemetry
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 09/10] ipsec: add support for initial SQN value Radu Nicolau
` (2 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add telemetry support for IPsec SAs.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 11 ++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 9 ++
7 files changed, 305 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9fc7075796..2c02c3bb12 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..d34798bc7f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,17 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..5b55bbc098 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, htonl(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = htonl((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = htonl((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec Security Association statistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Association configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ad3e38b7c8..7ce6ff9ab3 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 09/10] ipsec: add support for initial SQN value
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 10/10] ipsec: add ol_flags support Radu Nicolau
2021-08-13 11:08 ` [dpdk-dev] [EXT] [PATCH v3 00/10] new features for ipsec and security libraries Akhil Goyal
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 2c02c3bb12..8a6d09558f 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 5b55bbc098..242fdcd461 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v3 10/10] ipsec: add ol_flags support
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-08-13 9:30 ` Radu Nicolau
2021-08-13 11:08 ` [dpdk-dev] [EXT] [PATCH v3 00/10] new features for ipsec and security libraries Akhil Goyal
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-08-13 9:30 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Set mbuf->ol_flags for IPsec packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
/* reset mbuf metadata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 8a6d09558f..d8e261e6fb 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 242fdcd461..51f71b30c6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update l2_len and l3_len fields for outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v3 00/10] new features for ipsec and security libraries
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
` (9 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 10/10] ipsec: add ol_flags support Radu Nicolau
@ 2021-08-13 11:08 ` Akhil Goyal
2021-08-13 11:41 ` Nicolau, Radu
10 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-08-13 11:08 UTC (permalink / raw)
To: Radu Nicolau
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, Anoob Joseph, declan.doherty,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
Changelog??
> Add support for:
> TSO, NAT-T/UDP encapsulation, ESN
> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
> SA telemetry
> mbuf offload flags
> Initial SQN value
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>
> Radu Nicolau (10):
> security: add support for TSO on IPsec session
> security: add UDP params for IPsec NAT-T
> security: add ESN field to ipsec_xform
> mbuf: add IPsec ESP tunnel type
> ipsec: add support for AEAD algorithms
> ipsec: add transmit segmentation offload support
> ipsec: add support for NAT-T
> ipsec: add support for SA telemetry
> ipsec: add support for initial SQN value
> ipsec: add ol_flags support
>
> lib/ipsec/crypto.h | 137 ++++++++++++
> lib/ipsec/esp_inb.c | 88 +++++++-
> lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
> lib/ipsec/iph.h | 23 +-
> lib/ipsec/meson.build | 2 +-
> lib/ipsec/rte_ipsec.h | 11 +
> lib/ipsec/rte_ipsec_sa.h | 11 +-
> lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
> lib/ipsec/sa.h | 43 ++++
> lib/ipsec/version.map | 9 +
> lib/mbuf/rte_mbuf_core.h | 1 +
> lib/security/rte_security.h | 31 +++
> 12 files changed, 951 insertions(+), 73 deletions(-)
>
> --
> 2.25.1
* Re: [dpdk-dev] [EXT] [PATCH v3 00/10] new features for ipsec and security libraries
2021-08-13 11:08 ` [dpdk-dev] [EXT] [PATCH v3 00/10] new features for ipsec and security libraries Akhil Goyal
@ 2021-08-13 11:41 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-08-13 11:41 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, hemant.agrawal, Anoob Joseph, declan.doherty,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
On 8/13/2021 12:08 PM, Akhil Goyal wrote:
> Changelog??
Sorry, just a small fix for a build error and corrected misspelled email
address.
>
>> Add support for:
>> TSO, NAT-T/UDP encapsulation, ESN
>> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
>> SA telemetry
>> mbuf offload flags
>> Initial SQN value
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>>
>> Radu Nicolau (10):
>> security: add support for TSO on IPsec session
>> security: add UDP params for IPsec NAT-T
>> security: add ESN field to ipsec_xform
>> mbuf: add IPsec ESP tunnel type
>> ipsec: add support for AEAD algorithms
>> ipsec: add transmit segmentation offload support
>> ipsec: add support for NAT-T
>> ipsec: add support for SA telemetry
>> ipsec: add support for initial SQN value
>> ipsec: add ol_flags support
>>
>> lib/ipsec/crypto.h | 137 ++++++++++++
>> lib/ipsec/esp_inb.c | 88 +++++++-
>> lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
>> lib/ipsec/iph.h | 23 +-
>> lib/ipsec/meson.build | 2 +-
>> lib/ipsec/rte_ipsec.h | 11 +
>> lib/ipsec/rte_ipsec_sa.h | 11 +-
>> lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
>> lib/ipsec/sa.h | 43 ++++
>> lib/ipsec/version.map | 9 +
>> lib/mbuf/rte_mbuf_core.h | 1 +
>> lib/security/rte_security.h | 31 +++
>> 12 files changed, 951 insertions(+), 73 deletions(-)
>>
>> --
>> 2.25.1
* [dpdk-dev] [PATCH v4 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (11 preceding siblings ...)
2021-08-13 9:30 ` [dpdk-dev] [PATCH v3 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session Radu Nicolau
` (9 more replies)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 subsequent siblings)
18 siblings, 10 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add support for TSO on IPsec session
security: add UDP params for IPsec NAT-T
security: add ESN field to ipsec_xform
mbuf: add IPsec ESP tunnel type
ipsec: add support for AEAD algorithms
ipsec: add transmit segmentation offload support
ipsec: add support for NAT-T
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
ipsec: add ol_flags support
lib/ipsec/crypto.h | 137 ++++++++++++
lib/ipsec/esp_inb.c | 88 +++++++-
lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
lib/ipsec/iph.h | 23 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 23 ++
lib/ipsec/rte_ipsec_sa.h | 11 +-
lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 43 ++++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 31 +++
12 files changed, 963 insertions(+), 73 deletions(-)
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
2.25.1
* [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:50 ` Zhang, Roy Fan
2021-09-24 9:09 ` Hemant Agrawal
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (8 subsequent siblings)
9 siblings, 2 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Allow user to provision a per security session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are specified in mbuf.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..45896a77d0 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable per session security TSO support, use the MSS value
+ * provided in the IPsec security session when PKT_TX_TCP_SEG or
+ * PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported by the
+ * driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and packet transmission will fail
+ * if it is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-09-03 12:50 ` Zhang, Roy Fan
2021-09-24 9:09 ` Hemant Agrawal
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:50 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Ananyev, Konstantin, Medvedkin, Vladimir, Richardson,
Bruce, hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Akhil Goyal <gakhil@marvell.com>; Doherty, Declan
> <declan.doherty@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; anoobj@marvell.com; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 01/10] security: add support for TSO on IPsec session
>
> Allow user to provision a per security session maximum segment size
> (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
> The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
> ol_flags are specified in mbuf.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/security/rte_security.h | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 88d31de0a6..45896a77d0 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
> * * 0: Disable per session security statistics collection for this SA.
> */
> uint32_t stats : 1;
> +
> + /** Transmit Segmentation Offload (TSO)
> + *
> + * * 1: Enable per session security TSO support, use MSS value
> provide
> + * in IPsec security session when PKT_TX_TCP_SEG or
> PKT_TX_UDP_SEG
> + * ol_flags are set in mbuf.
> + * this SA, if supported by the driver.
> + * * 0: No TSO support for offload IPsec packets. Hardware will not
> + * attempt to segment packet, and packet transmission will fail if
> + * larger than MTU of interface
> + */
> + uint32_t tso : 1;
> +
> };
>
> /** IPSec security association direction */
> @@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
> /**< Anti replay window size to enable sequence replay attack
> handling.
> * replay checking is disabled if the window size is 0.
> */
> + uint32_t mss;
> + /**< IPsec payload Maximum Segment Size */
> };
>
> /**
> --
> 2.25.1
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* Re: [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-09-03 12:50 ` Zhang, Roy Fan
@ 2021-09-24 9:09 ` Hemant Agrawal
1 sibling, 0 replies; 184+ messages in thread
From: Hemant Agrawal @ 2021-09-24 9:09 UTC (permalink / raw)
To: Radu Nicolau, Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
* [dpdk-dev] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:51 ` Zhang, Roy Fan
2021-09-05 14:19 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (7 subsequent siblings)
9 siblings, 2 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add support for specifying UDP port params for UDP encapsulation option.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 45896a77d0..03572b10ab 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-09-03 12:51 ` Zhang, Roy Fan
2021-09-05 14:19 ` [dpdk-dev] [EXT] " Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:51 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Ananyev, Konstantin, Medvedkin, Vladimir, Richardson,
Bruce, hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Akhil Goyal <gakhil@marvell.com>; Doherty, Declan
> <declan.doherty@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; anoobj@marvell.com; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
>
> Add support for specifying UDP port params for UDP encapsulation option.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/security/rte_security.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 45896a77d0..03572b10ab 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
> };
> };
>
> +struct rte_security_ipsec_udp_param {
> +
> + uint16_t sport;
> + uint16_t dport;
> +};
> +
> /**
> * IPsec Security Association option flags
> */
> @@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
> /**< IPsec SA Mode - transport/tunnel */
> struct rte_security_ipsec_tunnel_param tunnel;
> /**< Tunnel parameters, NULL for transport mode */
> + struct rte_security_ipsec_udp_param udp;
> + /**< UDP parameters, ignored when udp_encap option not
> specified */
> uint64_t esn_soft_limit;
> /**< ESN for which the overflow event need to be raised */
> uint32_t replay_win_sz;
> --
> 2.25.1
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* Re: [dpdk-dev] [EXT] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
2021-09-03 12:51 ` Zhang, Roy Fan
@ 2021-09-05 14:19 ` Akhil Goyal
2021-09-06 11:09 ` Nicolau, Radu
1 sibling, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-09-05 14:19 UTC (permalink / raw)
To: Radu Nicolau, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
Hi Radu,
> Add support for specifying UDP port params for UDP encapsulation option.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Do we really need to specify the port numbers for NAT-T?
I suppose they are fixed as 4500.
Could you please specify what the user needs to set here for session
creation?
> ---
> lib/security/rte_security.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 45896a77d0..03572b10ab 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
> };
> };
>
> +struct rte_security_ipsec_udp_param {
> +
> + uint16_t sport;
> + uint16_t dport;
> +};
> +
> /**
> * IPsec Security Association option flags
> */
> @@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
> /**< IPsec SA Mode - transport/tunnel */
> struct rte_security_ipsec_tunnel_param tunnel;
> /**< Tunnel parameters, NULL for transport mode */
> + struct rte_security_ipsec_udp_param udp;
> + /**< UDP parameters, ignored when udp_encap option not specified
> */
> uint64_t esn_soft_limit;
> /**< ESN for which the overflow event need to be raised */
> uint32_t replay_win_sz;
> --
> 2.25.1
* Re: [dpdk-dev] [EXT] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-05 14:19 ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-09-06 11:09 ` Nicolau, Radu
2021-09-24 9:11 ` Hemant Agrawal
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-06 11:09 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
On 9/5/2021 3:19 PM, Akhil Goyal wrote:
> Hi Radu,
>
>> Add support for specifying UDP port params for UDP encapsulation option.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Do we really need to specify the port numbers for NAT-T?
> I suppose they are fixed as 4500.
> Could you please specify what the user need to set here for session
> creation?
From what I'm seeing here
https://datatracker.ietf.org/doc/html/rfc3948#section-2.1 there is no
requirement in general for UDP encapsulation, so I think it's better to
make the API flexible so as to allow any port to be used.
>
>> ---
>> lib/security/rte_security.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>> index 45896a77d0..03572b10ab 100644
>> --- a/lib/security/rte_security.h
>> +++ b/lib/security/rte_security.h
>> @@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
>> };
>> };
>>
>> +struct rte_security_ipsec_udp_param {
>> +
>> + uint16_t sport;
>> + uint16_t dport;
>> +};
>> +
>> /**
>> * IPsec Security Association option flags
>> */
>> @@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
>> /**< IPsec SA Mode - transport/tunnel */
>> struct rte_security_ipsec_tunnel_param tunnel;
>> /**< Tunnel parameters, NULL for transport mode */
>> + struct rte_security_ipsec_udp_param udp;
>> + /**< UDP parameters, ignored when udp_encap option not specified
>> */
>> uint64_t esn_soft_limit;
>> /**< ESN for which the overflow event need to be raised */
>> uint32_t replay_win_sz;
>> --
>> 2.25.1
* Re: [dpdk-dev] [EXT] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-06 11:09 ` Nicolau, Radu
@ 2021-09-24 9:11 ` Hemant Agrawal
2021-09-27 9:16 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Hemant Agrawal @ 2021-09-24 9:11 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
On 9/6/2021 4:39 PM, Nicolau, Radu wrote:
>
> On 9/5/2021 3:19 PM, Akhil Goyal wrote:
>> Hi Radu,
>>
>>> Add support for specifying UDP port params for UDP encapsulation
>>> option.
>>>
>>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Do we really need to specify the port numbers for NAT-T?
>> I suppose they are fixed as 4500.
>> Could you please specify what the user need to set here for session
>> creation?
>
> From what I'm seeing here
> https://datatracker.ietf.org/doc/html/rfc3948#section-2.1 there is no
> requirement in general for UDP encapsulation so I think it's better to
> make the API flexible as to allow any port to be used.
This section states that:
o the Source Port and Destination Port MUST be the same as that used by IKE traffic,
IKE uses port 4500
Am I missing something?
>
>
>>
>>> ---
>>> lib/security/rte_security.h | 8 ++++++++
>>> 1 file changed, 8 insertions(+)
>>>
>>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>>> index 45896a77d0..03572b10ab 100644
>>> --- a/lib/security/rte_security.h
>>> +++ b/lib/security/rte_security.h
>>> @@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
>>> };
>>> };
>>>
>>> +struct rte_security_ipsec_udp_param {
>>> +
>>> + uint16_t sport;
>>> + uint16_t dport;
>>> +};
>>> +
>>> /**
>>> * IPsec Security Association option flags
>>> */
>>> @@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
>>> /**< IPsec SA Mode - transport/tunnel */
>>> struct rte_security_ipsec_tunnel_param tunnel;
>>> /**< Tunnel parameters, NULL for transport mode */
>>> + struct rte_security_ipsec_udp_param udp;
>>> + /**< UDP parameters, ignored when udp_encap option not specified
>>> */
>>> uint64_t esn_soft_limit;
>>> /**< ESN for which the overflow event need to be raised */
>>> uint32_t replay_win_sz;
>>> --
>>> 2.25.1
* Re: [dpdk-dev] [EXT] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-24 9:11 ` Hemant Agrawal
@ 2021-09-27 9:16 ` Nicolau, Radu
2021-09-28 7:07 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-27 9:16 UTC (permalink / raw)
To: hemant.agrawal, Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, Anoob Joseph, abhijit.sinha,
daniel.m.buckley, Archana Muniganti, Tejasree Kondoj, matan
On 9/24/2021 10:11 AM, Hemant Agrawal wrote:
>
>
> On 9/6/2021 4:39 PM, Nicolau, Radu wrote:
>>
>> On 9/5/2021 3:19 PM, Akhil Goyal wrote:
>>> Hi Radu,
>>>
>>>> Add support for specifying UDP port params for UDP encapsulation
>>>> option.
>>>>
>>>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>>>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>>> Do we really need to specify the port numbers for NAT-T?
>>> I suppose they are fixed as 4500.
>>> Could you please specify what the user need to set here for session
>>> creation?
>>
>> From what I'm seeing here
>> https://datatracker.ietf.org/doc/html/rfc3948#section-2.1 there is no
>> requirement in general for UDP encapsulation so I think it's better
>> to make the API flexible as to allow any port to be used.
>
>
> This section states that :
>
> o the Source Port and Destination Port MUST be the same as that used by IKE traffic,
>
> IKE usages port 4500
>
> am I missing something?
I think there's enough confusion in the RFCs, so it's better to
keep this option flexible:
For example https://datatracker.ietf.org/doc/html/rfc5996#section-2.23:
> It is a common practice of NATs to translate TCP and UDP port numbers
> as well as addresses and use the port numbers of inbound packets to
> decide which internal node should get a given packet. For this
> reason, even though IKE packets MUST be sent to and from UDP port 500
> or 4500, they MUST be accepted coming from any port and responses
> MUST be sent to the port from whence they came. This is because the
> ports may be modified as the packets pass through NATs. Similarly,
> IP addresses of the IKE endpoints are generally not included in the
> IKE payloads because the payloads are cryptographically protected and
> could not be transparently modified by NATs.
* Re: [dpdk-dev] [EXT] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
2021-09-27 9:16 ` Nicolau, Radu
@ 2021-09-28 7:07 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-09-28 7:07 UTC (permalink / raw)
To: Nicolau, Radu, hemant.agrawal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, Anoob Joseph, abhijit.sinha,
daniel.m.buckley, Archana Muniganti, Tejasree Kondoj, matan
The RFC states that for NAT-T the port should be 4500, but for plain UDP encapsulation it does not specify one.
Hence it should be generic here.
From: Nicolau, Radu <radu.nicolau@intel.com>
Sent: Monday, September 27, 2021 2:47 PM
To: hemant.agrawal@nxp.com; Akhil Goyal <gakhil@marvell.com>; Declan Doherty <declan.doherty@intel.com>
Cc: dev@dpdk.org; mdr@ashroe.eu; konstantin.ananyev@intel.com; vladimir.medvedkin@intel.com; bruce.richardson@intel.com; roy.fan.zhang@intel.com; Anoob Joseph <anoobj@marvell.com>; abhijit.sinha@intel.com; daniel.m.buckley@intel.com; Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; matan@nvidia.com
Subject: Re: [dpdk-dev] [EXT] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T
On 9/24/2021 10:11 AM, Hemant Agrawal wrote:
On 9/6/2021 4:39 PM, Nicolau, Radu wrote:
On 9/5/2021 3:19 PM, Akhil Goyal wrote:
Hi Radu,
Add support for specifying UDP port params for UDP encapsulation option.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Do we really need to specify the port numbers for NAT-T?
I suppose they are fixed as 4500.
Could you please specify what the user need to set here for session
creation?
From what I'm seeing here https://datatracker.ietf.org/doc/html/rfc3948#section-2.1 there is no requirement in general for UDP encapsulation, so I think it's better to make the API flexible so as to allow any port to be used.
This section states that:
o the Source Port and Destination Port MUST be the same as that used by IKE traffic,
IKE uses port 4500
Am I missing something?
I think there's enough confusion in the RFCs, so it's better to keep this option flexible:
For example https://datatracker.ietf.org/doc/html/rfc5996#section-2.23:
It is a common practice of NATs to translate TCP and UDP port numbers
as well as addresses and use the port numbers of inbound packets to
decide which internal node should get a given packet. For this
reason, even though IKE packets MUST be sent to and from UDP port 500
or 4500, they MUST be accepted coming from any port and responses
MUST be sent to the port from whence they came. This is because the
ports may be modified as the packets pass through NATs. Similarly,
IP addresses of the IKE endpoints are generally not included in the
IKE payloads because the payloads are cryptographically protected and
could not be transparently modified by NATs.
* [dpdk-dev] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:50 ` Zhang, Roy Fan
2021-09-05 14:47 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (6 subsequent siblings)
9 siblings, 2 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Update ipsec_xform definition to include ESN field.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 03572b10ab..702de58b48 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-09-03 12:50 ` Zhang, Roy Fan
2021-09-05 14:47 ` [dpdk-dev] [EXT] " Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:50 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Ananyev, Konstantin, Medvedkin, Vladimir, Richardson,
Bruce, hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Akhil Goyal <gakhil@marvell.com>; Doherty, Declan
> <declan.doherty@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; anoobj@marvell.com; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 03/10] security: add ESN field to ipsec_xform
>
> Update ipsec_xform definition to include ESN field.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/security/rte_security.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 03572b10ab..702de58b48 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
> */
> uint32_t mss;
> /**< IPsec payload Maximum Segment Size */
> + union {
> + uint64_t value;
> + struct {
> + uint32_t low;
> + uint32_t hi;
> + };
> + } esn;
> + /**< Extended Sequence Number */
> };
>
> /**
> --
> 2.25.1
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* Re: [dpdk-dev] [EXT] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 03/10] security: add ESN field to ipsec_xform Radu Nicolau
2021-09-03 12:50 ` Zhang, Roy Fan
@ 2021-09-05 14:47 ` Akhil Goyal
2021-09-06 11:21 ` Nicolau, Radu
1 sibling, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-09-05 14:47 UTC (permalink / raw)
To: Radu Nicolau, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
Hi Radu,
> ----------------------------------------------------------------------
> Update ipsec_xform definition to include ESN field.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/security/rte_security.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 03572b10ab..702de58b48 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
> */
> uint32_t mss;
> /**< IPsec payload Maximum Segment Size */
> + union {
> + uint64_t value;
> + struct {
> + uint32_t low;
> + uint32_t hi;
> + };
> + } esn;
> + /**< Extended Sequence Number */
> };
Can we use the following change for monitoring ESN?
http://patches.dpdk.org/project/dpdk/patch/1629207767-262-2-git-send-email-anoobj@marvell.com/
I believe the ESN is not required to be set as an SA parameter; it is normally
maintained by the PMD, and the application should be notified if a limit is reached.
Regards,
Akhil
* Re: [dpdk-dev] [EXT] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-05 14:47 ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-09-06 11:21 ` Nicolau, Radu
2021-09-06 11:36 ` Anoob Joseph
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-06 11:21 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
abhijit.sinha, daniel.m.buckley, Archana Muniganti,
Tejasree Kondoj, matan
On 9/5/2021 3:47 PM, Akhil Goyal wrote:
> Hi Radu,
>
>> ----------------------------------------------------------------------
>> Update ipsec_xform definition to include ESN field.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> ---
>> lib/security/rte_security.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>> index 03572b10ab..702de58b48 100644
>> --- a/lib/security/rte_security.h
>> +++ b/lib/security/rte_security.h
>> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
>> */
>> uint32_t mss;
>> /**< IPsec payload Maximum Segment Size */
>> + union {
>> + uint64_t value;
>> + struct {
>> + uint32_t low;
>> + uint32_t hi;
>> + };
>> + } esn;
>> + /**< Extended Sequence Number */
>> };
> Can we use the following change for monitoring ESN?
> http://patches.dpdk.org/project/dpdk/patch/1629207767-262-2-git-send-email-anoobj@marvell.com/
>
> I believe ESN is not required to be set as SA parameter, it is normally
> maintained by the PMD and application should be notified if a limit is reached.
>
> Regards,
> Akhil
Hi Akhil, I suppose they can be complementary, with this one being a
hard ESN limit that the user can enforce by setting the initial ESN
value - but there is no requirement to do so. Also, this change doesn't
need explicit support added in the PMDs.
* Re: [dpdk-dev] [EXT] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-06 11:21 ` Nicolau, Radu
@ 2021-09-06 11:36 ` Anoob Joseph
2021-09-06 13:39 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Anoob Joseph @ 2021-09-06 11:36 UTC (permalink / raw)
To: Nicolau, Radu
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, abhijit.sinha,
daniel.m.buckley, Archana Muniganti, Tejasree Kondoj, matan,
Akhil Goyal, Declan Doherty
Hi Radu,
> Hi Akhil, I suppose they can be complementary, with this one being a hard
> ESN limit that the user can enforce by setting the initial ESN value - but there
> is no requirement to do so. Also, this change doesn't need explicit support
> added in the PMDs.
What is the actual use case of this field (ESN)? My impression was that it allows the application to control the sequence number. For normal use cases, it can act as the starting sequence number, and it can be used with ``rte_security_session_update`` to simulate corner cases (like large anti-replay window sizes with ESN enabled, etc.). Did I capture the intended use case correctly?
If it is to set max sequence number to be handled by the session, then I guess, this is getting addressed as part of SA lifetime spec proposal.
Can you confirm what is the intended use case?
Thanks,
Anoob
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Monday, September 6, 2021 4:51 PM
> To: Akhil Goyal <gakhil@marvell.com>; Declan Doherty
> <declan.doherty@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; konstantin.ananyev@intel.com;
> vladimir.medvedkin@intel.com; bruce.richardson@intel.com;
> roy.fan.zhang@intel.com; hemant.agrawal@nxp.com; Anoob Joseph
> <anoobj@marvell.com>; abhijit.sinha@intel.com;
> daniel.m.buckley@intel.com; Archana Muniganti <marchana@marvell.com>;
> Tejasree Kondoj <ktejasree@marvell.com>; matan@nvidia.com
> Subject: Re: [EXT] [PATCH v4 03/10] security: add ESN field to ipsec_xform
>
>
> On 9/5/2021 3:47 PM, Akhil Goyal wrote:
> > Hi Radu,
> >
> >> ----------------------------------------------------------------------
> >> Update ipsec_xform definition to include ESN field.
> >>
> >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> >> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> >> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> >> ---
> >> lib/security/rte_security.h | 8 ++++++++
> >> 1 file changed, 8 insertions(+)
> >>
> >> diff --git a/lib/security/rte_security.h
> >> b/lib/security/rte_security.h index 03572b10ab..702de58b48 100644
> >> --- a/lib/security/rte_security.h
> >> +++ b/lib/security/rte_security.h
> >> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
> >> */
> >> uint32_t mss;
> >> /**< IPsec payload Maximum Segment Size */
> >> + union {
> >> + uint64_t value;
> >> + struct {
> >> + uint32_t low;
> >> + uint32_t hi;
> >> + };
> >> + } esn;
> >> + /**< Extended Sequence Number */
> >> };
> > Can we use the following change for monitoring ESN?
> > http://patches.dpdk.org/project/dpdk/patch/1629207767-262-2-git-send-email-anoobj@marvell.com/
> >
> > I believe ESN is not required to be set as SA parameter, it is
> > normally maintained by the PMD and application should be notified if a limit
> is reached.
> >
> > Regards,
> > Akhil
>
> Hi Akhil, I suppose they can be complementary, with this one being a hard
> ESN limit that the user can enforce by setting the initial ESN value - but there
> is no requirement to do so. Also, this change doesn't need explicit support
> added in the PMDs.
>
* Re: [dpdk-dev] [EXT] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-06 11:36 ` Anoob Joseph
@ 2021-09-06 13:39 ` Nicolau, Radu
2021-09-06 13:50 ` Anoob Joseph
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-06 13:39 UTC (permalink / raw)
To: Anoob Joseph
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, abhijit.sinha,
daniel.m.buckley, Archana Muniganti, Tejasree Kondoj, matan,
Akhil Goyal, Declan Doherty
On 9/6/2021 12:36 PM, Anoob Joseph wrote:
> Hi Radu,
>
>> Hi Akhil, I suppose they can be complementary, with this one being a hard
>> ESN limit that the user can enforce by setting the initial ESN value - but there
>> is no requirement to do so. Also, this change doesn't need explicit support
>> added in the PMDs.
> What is the actual use case of this field (ESN)? My impression was this is to allow application to control sequence number. For normal use cases, it can be like starting sequence number. And this can be used with ``rte_security_session_update`` to allow simulating corner cases (like large anti-replay windows sizes with ESN enabled etc). Did I capture the intended use case correctly?
>
> If it is to set max sequence number to be handled by the session, then I guess, this is getting addressed as part of SA lifetime spec proposal.
>
> Can you confirm what is the intended use case?
>
> Thanks,
> Anoob
Hi Anoob, the purpose was to have a starting value controlled by the app
and I think you're right, it can be achieved with
rte_security_session_update.
Thanks,
Radu
* Re: [dpdk-dev] [EXT] [PATCH v4 03/10] security: add ESN field to ipsec_xform
2021-09-06 13:39 ` Nicolau, Radu
@ 2021-09-06 13:50 ` Anoob Joseph
0 siblings, 0 replies; 184+ messages in thread
From: Anoob Joseph @ 2021-09-06 13:50 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, abhijit.sinha,
daniel.m.buckley, Archana Muniganti, Tejasree Kondoj, matan,
Declan Doherty
Hi Radu, Akhil,
Please see inline
Thanks,
Anoob
>
> On 9/6/2021 12:36 PM, Anoob Joseph wrote:
> > Hi Radu,
> >
> >> Hi Akhil, I suppose they can be complementary, with this one being a
> >> hard ESN limit that the user can enforce by setting the initial ESN
> >> value - but there is no requirement to do so. Also, this change
> >> doesn't need explicit support added in the PMDs.
> > What is the actual use case of this field (ESN)? My impression was this is to
> allow application to control sequence number. For normal use cases, it can be
> like starting sequence number. And this can be used with
> ``rte_security_session_update`` to allow simulating corner cases (like large
> anti-replay windows sizes with ESN enabled etc). Did I capture the intended
> use case correctly?
> >
> > If it is to set max sequence number to be handled by the session, then I
> guess, this is getting addressed as part of SA lifetime spec proposal.
> >
> > Can you confirm what is the intended use case?
> >
> > Thanks,
> > Anoob
>
> Hi Anoob, the purpose was to have a starting value controlled by the app and
> I think you're right, it can be achieved with rte_security_session_update.
>
[Anoob] Thanks for the confirmation. In that case, I'm in agreement with this proposal. Maybe update the patch description to better explain the use case.
Acked-by: Anoob Joseph <anoobj@marvell.com>
* [dpdk-dev] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (2 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:49 ` Zhang, Roy Fan
2021-09-05 14:34 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (5 subsequent siblings)
9 siblings, 2 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add tunnel type for IPsec ESP tunnels
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-09-03 12:49 ` Zhang, Roy Fan
2021-09-05 14:34 ` [dpdk-dev] [EXT] " Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:49 UTC (permalink / raw)
To: Nicolau, Radu, Olivier Matz
Cc: dev, mdr, Ananyev, Konstantin, Medvedkin, Vladimir, Richardson,
Bruce, hemant.agrawal, gakhil, anoobj, Doherty, Declan, Sinha,
Abhijit, Buckley, Daniel M, marchana, ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Olivier Matz <olivier.matz@6wind.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; gakhil@marvell.com; anoobj@marvell.com;
> Doherty, Declan <declan.doherty@intel.com>; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type
>
> Add tunnel type for IPsec ESP tunnels
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/mbuf/rte_mbuf_core.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index bb38d7f581..a4d95deee6 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -253,6 +253,7 @@ extern "C" {
> #define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
> #define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
> #define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
> +#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
> /**
> * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
> * It can be used for tunnels which are not standards or listed above.
> --
> 2.25.1
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* Re: [dpdk-dev] [EXT] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
2021-09-03 12:49 ` Zhang, Roy Fan
@ 2021-09-05 14:34 ` Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-09-05 14:34 UTC (permalink / raw)
To: Radu Nicolau, Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> ----------------------------------------------------------------------
> Add tunnel type for IPsec ESP tunnels
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/mbuf/rte_mbuf_core.h | 1 +
> 1 file changed, 1 insertion(+)
>
Acked-by: Akhil Goyal <gakhil@marvell.com>
* [dpdk-dev] [PATCH v4 05/10] ipsec: add support for AEAD algorithms
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (3 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:49 ` Zhang, Roy Fan
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (4 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally that to be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that to be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v4 05/10] ipsec: add support for AEAD algorithms
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-09-03 12:49 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:49 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, hemant.agrawal, gakhil, anoobj,
Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M, marchana,
ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; gakhil@marvell.com; anoobj@marvell.com;
> Doherty, Declan <declan.doherty@intel.com>; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 05/10] ipsec: add support for AEAD algorithms
>
> Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* [dpdk-dev] [PATCH v4 06/10] ipsec: add transmit segmentation offload support
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (4 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:52 ` Zhang, Roy Fan
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 07/10] ipsec: add support for NAT-T Radu Nicolau
` (3 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..9fc7075796 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = ceil((float)pkt_l3len / sa->tso.mss);
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ }
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: add transmit segmentation offload support
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-09-03 12:52 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:52 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, hemant.agrawal, gakhil, anoobj,
Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M, marchana,
ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; gakhil@marvell.com; anoobj@marvell.com;
> Doherty, Declan <declan.doherty@intel.com>; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 06/10] ipsec: add transmit segmentation offload support
>
> Add support for transmit segmentation offload to inline crypto processing
> mode. This offload is not supported by other offload modes, as at a
> minimum it requires inline crypto for IPsec to be supported on the
> network interface.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* [dpdk-dev] [PATCH v4 07/10] ipsec: add support for NAT-T
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (5 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:52 ` Zhang, Roy Fan
2021-09-05 15:00 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (2 subsequent siblings)
9 siblings, 2 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/iph.h | 13 +++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..093f86d34a 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,23 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 07/10] ipsec: add support for NAT-T
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-09-03 12:52 ` Zhang, Roy Fan
2021-09-05 15:00 ` [dpdk-dev] [EXT] " Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:52 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, hemant.agrawal, gakhil, anoobj,
Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M, marchana,
ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; gakhil@marvell.com; anoobj@marvell.com;
> Doherty, Declan <declan.doherty@intel.com>; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 07/10] ipsec: add support for NAT-T
>
> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> packets.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* Re: [dpdk-dev] [EXT] [PATCH v4 07/10] ipsec: add support for NAT-T
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 07/10] ipsec: add support for NAT-T Radu Nicolau
2021-09-03 12:52 ` Zhang, Roy Fan
@ 2021-09-05 15:00 ` Akhil Goyal
2021-09-06 11:31 ` Nicolau, Radu
1 sibling, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-09-05 15:00 UTC (permalink / raw)
To: Radu Nicolau, Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
Anoob Joseph, declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> packets.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/ipsec/iph.h | 13 +++++++++++++
> lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
> lib/ipsec/sa.c | 13 ++++++++++++-
> lib/ipsec/sa.h | 4 ++++
> 4 files changed, 36 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
> index 2d223199ac..093f86d34a 100644
> --- a/lib/ipsec/iph.h
> +++ b/lib/ipsec/iph.h
> @@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa
> *sa, void *outh,
> {
> struct rte_ipv4_hdr *v4h;
> struct rte_ipv6_hdr *v6h;
> + struct rte_udp_hdr *udph;
> uint8_t is_outh_ipv4;
>
> if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> @@ -258,11 +259,23 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa
> *sa, void *outh,
> v4h = outh;
> v4h->packet_id = pid;
> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> +
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + udph = (struct rte_udp_hdr *)(v4h + 1);
> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> + (sizeof(*v4h) + sizeof(*udph)));
> + }
> } else {
> is_outh_ipv4 = 0;
> v6h = outh;
> v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> sizeof(*v6h));
> +
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + udph = (struct rte_udp_hdr *)(v6h + 1);
> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> + (sizeof(*v6h) + sizeof(*udph)));
> + }
> }
>
> if (sa->type & TUN_HDR_MSK)
> diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
> index cf51ad8338..40d1e70d45 100644
> --- a/lib/ipsec/rte_ipsec_sa.h
> +++ b/lib/ipsec/rte_ipsec_sa.h
> @@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
> * - inbound/outbound
> * - mode (TRANSPORT/TUNNEL)
> * - for TUNNEL outer IP version (IPv4/IPv6)
> + * - NAT-T UDP encapsulated (TUNNEL mode only)
> * - are SA SQN operations 'atomic'
> * - ESN enabled/disabled
> * ...
> @@ -86,7 +87,8 @@ enum {
> RTE_SATP_LOG2_PROTO,
> RTE_SATP_LOG2_DIR,
> RTE_SATP_LOG2_MODE,
> - RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
> + RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
> + RTE_SATP_LOG2_SQN,
> RTE_SATP_LOG2_ESN,
> RTE_SATP_LOG2_ECN,
> RTE_SATP_LOG2_DSCP
> @@ -109,6 +111,10 @@ enum {
> #define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL <<
> RTE_SATP_LOG2_MODE)
> #define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL <<
> RTE_SATP_LOG2_MODE)
>
> +#define RTE_IPSEC_SATP_NATT_MASK (1ULL <<
> RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL <<
> RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL <<
> RTE_SATP_LOG2_NATT)
> +
> #define RTE_IPSEC_SATP_SQN_MASK (1ULL <<
> RTE_SATP_LOG2_SQN)
> #define RTE_IPSEC_SATP_SQN_RAW (0ULL <<
> RTE_SATP_LOG2_SQN)
> #define RTE_IPSEC_SATP_SQN_ATOM (1ULL <<
> RTE_SATP_LOG2_SQN)
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 2ecbbce0a4..8e369e4618 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm,
> uint64_t *type)
> } else
> return -EINVAL;
>
> + /* check for UDP encapsulation flag */
> + if (prm->ipsec_xform.options.udp_encap == 1)
> + tp |= RTE_IPSEC_SATP_NATT_ENABLE;
> +
> /* check for ESN flag */
> if (prm->ipsec_xform.options.esn == 0)
> tp |= RTE_IPSEC_SATP_ESN_DISABLE;
> @@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct
> rte_ipsec_sa_prm *prm,
> const struct crypto_xform *cxf)
> {
> static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> - RTE_IPSEC_SATP_MODE_MASK;
> + RTE_IPSEC_SATP_MODE_MASK |
> + RTE_IPSEC_SATP_NATT_MASK;
>
> if (prm->ipsec_xform.options.ecn)
> sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
> @@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct
> rte_ipsec_sa_prm *prm,
> case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_inb_init(sa);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> esp_outb_tun_init(sa, prm);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_outb_init(sa, 0);
> break;
> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> index 5e237f3525..3f38921eb3 100644
> --- a/lib/ipsec/sa.h
> +++ b/lib/ipsec/sa.h
> @@ -101,6 +101,10 @@ struct rte_ipsec_sa {
> uint64_t msk;
> uint64_t val;
> } tx_offload;
> + struct {
> + uint16_t sport;
> + uint16_t dport;
> + } natt;
These ports are not getting used in this patch,
As indicated in the previous patch, do we really need these?
As for NAT-T, 4500 is the default port.
* Re: [dpdk-dev] [EXT] [PATCH v4 07/10] ipsec: add support for NAT-T
2021-09-05 15:00 ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-09-06 11:31 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-06 11:31 UTC (permalink / raw)
To: Akhil Goyal, Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
Anoob Joseph, declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
On 9/5/2021 4:00 PM, Akhil Goyal wrote:
>> Add support for the IPsec NAT-Traversal use case for Tunnel mode
>> packets.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> ---
>>
>> These ports are not getting used in this patch,
>> As indicated in the previous patch, do we really need these?
>> As for NAT-T, 4500 is the default port.
Yes, you are right, they aren't used; I will update the patch. About the
need to have them, I replied in the previous patch: from what I can see
the RFC doesn't require specific ports for UDP encapsulation.
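For reference, RFC 3948 defines UDP encapsulation of ESP as simply prepending an 8-byte UDP header (with checksum permitted to be zero) to the ESP packet, with 4500 only the conventional port. The sketch below (plain Python, not DPDK code; `udp_encap_esp` and `IKE_NATT_PORT` are illustrative names) shows why no particular port is structurally required:

```python
import struct

IKE_NATT_PORT = 4500  # RFC 3948 conventional port; others may be negotiated

def udp_encap_esp(esp_pkt: bytes, sport: int = IKE_NATT_PORT,
                  dport: int = IKE_NATT_PORT) -> bytes:
    """Prepend a UDP header to an ESP packet (RFC 3948 UDP encapsulation).

    The UDP checksum is set to zero, as permitted for UDP-encapsulated ESP.
    """
    length = 8 + len(esp_pkt)  # UDP header is 8 bytes
    udp_hdr = struct.pack("!HHHH", sport, dport, length, 0)
    return udp_hdr + esp_pkt

# A minimal "ESP packet": 4-byte SPI + 4-byte sequence number + payload
esp = struct.pack("!II", 0x1234, 1) + b"payload"
pkt = udp_encap_esp(esp)
```

Nothing in the encapsulation itself depends on the port values, which is why the library can leave them configurable.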
* [dpdk-dev] [PATCH v4 08/10] ipsec: add support for SA telemetry
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (6 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:53 ` Zhang, Roy Fan
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 09/10] ipsec: add support for initial SQN value Radu Nicolau
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 10/10] ipsec: add ol_flags support Radu Nicolau
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for ipsec SAs
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 23 ++++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 9 ++
7 files changed, 317 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9fc7075796..2c02c3bb12 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..2bb52f4b8f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+/**
+ * Initialize IPsec library telemetry.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..5b55bbc098 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, htonl(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = htonl((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = htonl((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
> + "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
> + "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
> + "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
> + "Returns IPsec Security Association statistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Association configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..fed6b6aba1 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
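For clarity, the `/ipsec/sa/stats` handler in the patch above builds a per-SA dictionary keyed by `SA_SPI_<spi>` with `count`, `bytes` and `errors` entries. A minimal sketch of that output shape (plain Python, illustrative only — `sa_stats_dict` and the record fields are hypothetical names; `bytes` is assumed to already reflect the library's `bytes - hdr_len * count` accounting, i.e. net of the tunnel header):

```python
def sa_stats_dict(sas):
    """Build the /ipsec/sa/stats response shape from a list of SA records.

    Each record is a dict with 'spi', 'count', 'bytes' and 'errors' keys,
    mirroring the key/value pairs the telemetry handler emits per SA.
    """
    return {
        "SA_SPI_%d" % sa["spi"]: {
            "count": sa["count"],
            "bytes": sa["bytes"],
            "errors": sa["errors"],
        }
        for sa in sas
    }

# One SA with SPI 5 that has processed 2 packets totalling 120 payload bytes
stats = sa_stats_dict([{"spi": 5, "count": 2, "bytes": 120, "errors": 0}])
```

This matches the dict-of-dicts example given in the handler's doc comment.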
* Re: [dpdk-dev] [PATCH v4 08/10] ipsec: add support for SA telemetry
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-09-03 12:53 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:53 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir, Ray Kinsella
Cc: dev, Richardson, Bruce, hemant.agrawal, gakhil, anoobj, Doherty,
Declan, Sinha, Abhijit, Buckley, Daniel M, marchana, ktejasree,
matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Ray Kinsella <mdr@ashroe.eu>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Zhang,
> Roy Fan <roy.fan.zhang@intel.com>; hemant.agrawal@nxp.com;
> gakhil@marvell.com; anoobj@marvell.com; Doherty, Declan
> <declan.doherty@intel.com>; Sinha, Abhijit <abhijit.sinha@intel.com>;
> Buckley, Daniel M <daniel.m.buckley@intel.com>; marchana@marvell.com;
> ktejasree@marvell.com; matan@nvidia.com; Nicolau, Radu
> <radu.nicolau@intel.com>
> Subject: [PATCH v4 08/10] ipsec: add support for SA telemetry
>
> Add telemetry support for ipsec SAs
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* [dpdk-dev] [PATCH v4 09/10] ipsec: add support for initial SQN value
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (7 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-03 12:53 ` Zhang, Roy Fan
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 10/10] ipsec: add ol_flags support Radu Nicolau
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 2c02c3bb12..8a6d09558f 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 5b55bbc098..242fdcd461 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
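The patch keys the sequence-number width off `sa->sqn_mask`, which is set to `UINT32_MAX` without ESN and `UINT64_MAX` with ESN, and now allows a non-zero initial value. A minimal sketch of that masking behaviour (plain Python, illustrative only — `next_sqn` is a hypothetical helper, not a library API):

```python
UINT32_MAX = (1 << 32) - 1
UINT64_MAX = (1 << 64) - 1

def next_sqn(sqn: int, esn_enabled: bool) -> int:
    """Advance an outbound sequence number under the SA's mask.

    Without ESN the sequence number is 32-bit and wraps at 2^32; with ESN
    it is 64-bit, mirroring sa->sqn_mask = UINT32_MAX vs UINT64_MAX.
    """
    mask = UINT64_MAX if esn_enabled else UINT32_MAX
    return (sqn + 1) & mask

# starting from a non-zero initial value, as the patch allows
sqn = next_sqn(100, esn_enabled=False)
# without ESN a 32-bit counter wraps; with ESN it keeps counting past 2^32
wrapped = next_sqn(UINT32_MAX, esn_enabled=False)
```

With ESN enabled, the same increment past `UINT32_MAX` continues into the upper 32 bits instead of wrapping, which is the point of carrying the full 64-bit value in `sa->sqn.outb`.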
* Re: [dpdk-dev] [PATCH v4 09/10] ipsec: add support for initial SQN value
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-09-03 12:53 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-03 12:53 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, hemant.agrawal, gakhil, anoobj,
Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M, marchana,
ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 3, 2021 12:26 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; gakhil@marvell.com; anoobj@marvell.com;
> Doherty, Declan <declan.doherty@intel.com>; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v4 09/10] ipsec: add support for initial SQN value
>
> Update IPsec library to support initial SQN value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* [dpdk-dev] [PATCH v4 10/10] ipsec: add ol_flags support
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
` (8 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-09-03 11:26 ` Radu Nicolau
2021-09-05 15:14 ` [dpdk-dev] [EXT] " Akhil Goyal
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-03 11:26 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Set mbuf->ol_flags for IPsec packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
> /* reset mbuf metadata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 8a6d09558f..d8e261e6fb 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 242fdcd461..51f71b30c6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update l2_len and l3_len fields for outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
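The `tx_offload.msk`/`val` pair above works because all the mbuf length fields are packed into one 64-bit word: the mask zeroes out the fields the SA will supply (l2/l3, plus outer-l3 for inline crypto) while everything else survives from the packet, and `val` ORs in the SA's preset values. A minimal self-contained sketch, assuming the upstream bit widths (l2:7, l3:9, l4:8, tso_segsz:16, outer-l3:9, outer-l2:7); `pack_tx_offload` is a hypothetical stand-in for `rte_mbuf_tx_offload()`:

```c
#include <stdint.h>
#include <assert.h>

/* assumed field widths, mirroring RTE_MBUF_*_BITS */
#define L2_BITS   7
#define L3_BITS   9
#define L4_BITS   8
#define TSO_BITS 16
#define OL3_BITS  9
#define OL2_BITS  7

#define LEN2MASK(n) ((1ULL << (n)) - 1)

/* hypothetical stand-in for rte_mbuf_tx_offload(): pack the
 * length fields into one 64-bit word, lowest field first */
static uint64_t
pack_tx_offload(uint64_t l2, uint64_t l3, uint64_t l4,
		uint64_t tso, uint64_t ol3, uint64_t ol2)
{
	return l2 |
		l3 << L2_BITS |
		l4 << (L2_BITS + L3_BITS) |
		tso << (L2_BITS + L3_BITS + L4_BITS) |
		ol3 << (L2_BITS + L3_BITS + L4_BITS + TSO_BITS) |
		ol2 << (L2_BITS + L3_BITS + L4_BITS + TSO_BITS + OL3_BITS);
}

/* apply the SA template: fields cleared by msk come from val,
 * all other fields are preserved from the packet */
static uint64_t
apply_sa_offload(uint64_t pkt, uint64_t msk, uint64_t val)
{
	return (pkt & msk) | val;
}
```

With `msk = ~pack_tx_offload(MAX_L2, MAX_L3, 0, 0, 0, 0)` as in the hunk above, the packet's stale l2/l3 values are replaced by the SA's while its l4 and TSO fields pass through untouched.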
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v4 10/10] ipsec: add ol_flags support
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 10/10] ipsec: add ol_flags support Radu Nicolau
@ 2021-09-05 15:14 ` Akhil Goyal
2021-09-06 11:53 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-09-05 15:14 UTC (permalink / raw)
To: Radu Nicolau, Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
Anoob Joseph, declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> Set mbuf->ol_flags for IPsec packets.
>
Could you please add more information in the description
about the need for the patch and what issue it resolves.
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v4 10/10] ipsec: add ol_flags support
2021-09-05 15:14 ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-09-06 11:53 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-06 11:53 UTC (permalink / raw)
To: Akhil Goyal, Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
Anoob Joseph, declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
On 9/5/2021 4:14 PM, Akhil Goyal wrote:
>> Set mbuf->ol_flags for IPsec packets.
>>
> Could you please add more information in the description
> about the need for the patch and what issue it resolves.
I will add something like below, is that OK?
"Update the IPsec library to set mbuf->ol_flags and use the configured
L3 header length when setting the mbuf->tx_offload fields"
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (12 preceding siblings ...)
2021-09-03 11:26 ` [dpdk-dev] [PATCH v4 " Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 01/10] security: add support for TSO on IPsec session Radu Nicolau
` (10 more replies)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (4 subsequent siblings)
18 siblings, 11 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add support for TSO on IPsec session
security: add UDP params for IPsec NAT-T
security: add ESN field to ipsec_xform
mbuf: add IPsec ESP tunnel type
ipsec: add support for AEAD algorithms
ipsec: add transmit segmentation offload support
ipsec: add support for NAT-T
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
ipsec: add ol_flags support
lib/ipsec/crypto.h | 137 ++++++++++++
lib/ipsec/esp_inb.c | 88 +++++++-
lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
lib/ipsec/iph.h | 27 ++-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 23 ++
lib/ipsec/rte_ipsec_sa.h | 11 +-
lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 43 ++++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 31 +++
12 files changed, 967 insertions(+), 73 deletions(-)
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
v5: update commit messages after feedback
update the UDP encapsulation patch to actually use the configured ports
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 01/10] security: add support for TSO on IPsec session
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (9 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Allow user to provision a per security session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are specified in mbuf.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..45896a77d0 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable per session security TSO support; use the MSS value
+ * provided in the IPsec security session when PKT_TX_TCP_SEG or
+ * PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported by
+ * the driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and transmission will fail if the
+ * packet is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
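The session `mss` above drives how hardware splits an oversized payload: the segment count is a ceiling division by MSS, and each resulting segment then carries its own ESP encapsulation on the wire. A hedged sketch of that arithmetic (the per-segment overhead value is caller-supplied and illustrative, not something this patch defines):

```c
#include <stdint.h>
#include <assert.h>

/* number of TSO segments needed for a payload at a given
 * session MSS (ceiling division) */
static uint32_t
tso_nb_segs(uint32_t payload_len, uint32_t mss)
{
	return (payload_len + mss - 1) / mss;
}

/* illustrative wire length of one full segment: MSS payload plus
 * per-segment IPsec overhead (outer IP + ESP header/IV/trailer/ICV);
 * the overhead figure is an assumption, not fixed by the API */
static uint32_t
tso_seg_wire_len(uint32_t mss, uint32_t ipsec_overhead)
{
	return mss + ipsec_overhead;
}
```

This is why the MSS is configured per security session rather than taken from the interface MTU: the IPsec overhead must be budgeted per segment.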
* [dpdk-dev] [PATCH v5 02/10] security: add UDP params for IPsec NAT-T
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (8 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add support for specifying UDP port params for UDP encapsulation option.
RFC 3948, section 2.1, does not enforce specific UDP ports for the
UDP-Encapsulated ESP header.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 45896a77d0..03572b10ab 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
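NAT-T wraps each ESP packet in a plain 8-byte UDP header; port 4500 is the RFC 3948 convention, but the new `udp` field lets the application choose both ports. A minimal self-contained sketch of building such a header (`udp_encap_hdr` and `be16` are local helpers for illustration, not DPDK APIs):

```c
#include <stdint.h>
#include <assert.h>

/* conventional 8-byte UDP header layout */
struct udp_encap_hdr {
	uint16_t sport;
	uint16_t dport;
	uint16_t len;
	uint16_t cksum;
};

/* local byte-swap helper; matches htons() on little-endian hosts */
static uint16_t
be16(uint16_t v)
{
	return (uint16_t)((v >> 8) | (v << 8));
}

/* fill the UDP header for an ESP payload of esp_len bytes;
 * RFC 3948 permits a zero UDP checksum for UDP-encapsulated ESP */
static void
udp_encap_fill(struct udp_encap_hdr *h, uint16_t sport,
		uint16_t dport, uint16_t esp_len)
{
	h->sport = be16(sport);
	h->dport = be16(dport);
	h->len = be16((uint16_t)(sizeof(*h) + esp_len));
	h->cksum = 0;
}
```

An SA configured with `udp.sport = udp.dport = 4500` reproduces the conventional NAT-T encapsulation, while non-default ports remain possible per the RFC.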
* [dpdk-dev] [PATCH v5 03/10] security: add ESN field to ipsec_xform
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (7 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 03572b10ab..702de58b48 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
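The `esn` union added above lets the application set the initial 64-bit ESN either as one value or as separate 32-bit halves; note that the `low`/`hi` member layout matches the value only on little-endian hosts, so portable code should go through `value`. A self-contained sketch (the union is a local copy mirroring the patch):

```c
#include <stdint.h>
#include <assert.h>

/* local copy of the esn union added to rte_security_ipsec_xform */
union esn_value {
	uint64_t value;
	struct {
		uint32_t low;
		uint32_t hi;
	};
};

/* compose a 64-bit ESN from its halves, independent of host
 * byte order */
static uint64_t
esn_compose(uint32_t hi, uint32_t low)
{
	return ((uint64_t)hi << 32) | low;
}
```

On a little-endian host, `e.value = esn_compose(1, 2)` leaves `e.low == 2` and `e.hi == 1`; on big-endian the two union members would be swapped relative to `value`.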
* [dpdk-dev] [PATCH v5 04/10] mbuf: add IPsec ESP tunnel type
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-16 12:38 ` Olivier Matz
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (6 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add tunnel type for IPsec ESP tunnels
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
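The TX tunnel type is not a standalone bit but a code point in a multi-bit field of `ol_flags` starting at bit 45; ESP takes the next free value, 0x8, after GTP's 0x7. A quick self-contained check of that encoding (the 4-bit field width mirrors DPDK's `PKT_TX_TUNNEL_MASK`, assumed here to be `0xFULL << 45`):

```c
#include <stdint.h>
#include <assert.h>

#define PKT_TX_TUNNEL_GTP  (0x7ULL << 45)
#define PKT_TX_TUNNEL_ESP  (0x8ULL << 45)
/* assumed: 4-bit tunnel-type field starting at bit 45 */
#define PKT_TX_TUNNEL_MASK (0xFULL << 45)

/* extract the tunnel type code point from ol_flags */
static uint64_t
tx_tunnel_type(uint64_t ol_flags)
{
	return (ol_flags & PKT_TX_TUNNEL_MASK) >> 45;
}
```

Because the field is a code point rather than a flag bit, drivers must compare against the mask, never test `ol_flags & PKT_TX_TUNNEL_ESP` directly.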
* Re: [dpdk-dev] [PATCH v5 04/10] mbuf: add IPsec ESP tunnel type
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-09-16 12:38 ` Olivier Matz
0 siblings, 0 replies; 184+ messages in thread
From: Olivier Matz @ 2021-09-16 12:38 UTC (permalink / raw)
To: Radu Nicolau
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan
On Fri, Sep 10, 2021 at 12:32:34PM +0100, Radu Nicolau wrote:
> Add tunnel type for IPsec ESP tunnels
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 05/10] ipsec: add support for AEAD algorithms
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (5 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
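The `aead_ccm_salt` union above reinterprets the 32-bit session salt as three salt bytes plus a CCM flags byte, which `aead_ccm_iv_fill()` then copies into the per-packet IV block along with a counter seeded to big-endian 1. A self-contained sketch of that byte split (local copies of the patch's structs; `be32` is a local byte-swap helper matching `rte_cpu_to_be_32()` on little-endian hosts):

```c
#include <stdint.h>
#include <assert.h>

union ccm_salt {
	uint32_t salt;
	struct {
		uint8_t salt8[3];
		uint8_t ccm_flags;
	} inner;
};

struct ccm_iv {
	uint8_t ccm_flags;
	uint8_t salt[3];
	uint64_t iv;
	uint32_t cnt;
};

/* local byte-swap; matches rte_cpu_to_be_32() on little-endian */
static uint32_t
be32(uint32_t v)
{
	return (v << 24) | ((v & 0xff00) << 8) |
		((v >> 8) & 0xff00) | (v >> 24);
}

/* mirror of aead_ccm_iv_fill(): split the salt bytes and seed
 * the block counter with big-endian 1 */
static void
ccm_iv_fill(struct ccm_iv *ccm, uint64_t iv, uint32_t salt)
{
	union ccm_salt t;

	t.salt = salt;
	ccm->ccm_flags = t.inner.ccm_flags;
	ccm->salt[0] = t.inner.salt8[0];
	ccm->salt[1] = t.inner.salt8[1];
	ccm->salt[2] = t.inner.salt8[2];
	ccm->iv = iv;
	ccm->cnt = be32(1);
}
```

The union simply exposes the salt's in-memory bytes, so which byte lands in `ccm_flags` depends on host byte order, exactly as in the patch.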
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
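The GCM, CCM and ChaCha20-Poly1305 AAD layouts above all share the same SQN handling: with ESN the full 64-bit value goes into the AAD, otherwise only the low 32 bits are written and the second word is zeroed. A self-contained sketch of that shared pattern (the struct is a local simplification, and extracting the low word with a plain cast stands in for the library's `sqn_low32()`, which also handles the big-endian layout):

```c
#include <stdint.h>
#include <assert.h>

/* simplified common shape of the aead_*_aad structs */
struct aead_aad {
	uint32_t spi;
	union {
		uint32_t u32[2];
		uint64_t u64;
	} sqn;
	uint32_t align0; /* pad to 16B boundary, must stay zeroed */
};

/* fill AAD as in aead_gcm/ccm/chacha20_poly1305_aad_fill():
 * 64-bit sqn for ESN, else low word only with the rest zeroed */
static void
aad_fill(struct aead_aad *aad, uint32_t spi, uint64_t sqn, int esn)
{
	aad->spi = spi;
	if (esn)
		aad->sqn.u64 = sqn;
	else {
		aad->sqn.u32[0] = (uint32_t)sqn; /* stand-in for sqn_low32() */
		aad->sqn.u32[1] = 0;
	}
	aad->align0 = 0;
}
```

Zeroing the unused word and the pad matters: the AAD is fed to the AEAD verbatim, so stale bytes would change the tag.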
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & RFC 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 06/10] ipsec: add transmit segmentation offload support
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 07/10] ipsec: add support for NAT-T Radu Nicolau
` (4 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
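The per-packet segment count determines how many ESP sequence numbers a
burst consumes: each TSO segment is transmitted as its own ESP packet.
Below is a minimal, self-contained sketch of that calculation using an
integer ceiling (the function name and the values are illustrative, not
part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Number of TSO segments a packet is split into: 1 when the L3
 * payload fits within the MSS, otherwise the integer ceiling of
 * payload length over MSS. Each segment consumes one ESP sequence
 * number, so the burst must reserve the sum over all packets.
 */
uint16_t
tso_nb_segments(uint32_t pkt_l3len, uint16_t mss)
{
	if (pkt_l3len <= mss)
		return 1;
	return (pkt_l3len + mss - 1) / mss;
}
```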
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..9fc7075796 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = (pkt_l3len + sa->tso.mss - 1) / sa->tso.mss;
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, advance sqn by the number of
+ * segments consumed by the packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, advance sqn by the number of
+ * segments consumed by the packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ }
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 07/10] ipsec: add support for NAT-T
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (3 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
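For reference, the UDP encapsulation added below sets the datagram
length to everything following the L2 header and outer IP header, minus
the UDP header size. A standalone sketch of the IPv4 arithmetic (fixed
20-byte IPv4 and 8-byte UDP header sizes are assumed here in place of
the sizeof() expressions used in the diff):

```c
#include <assert.h>
#include <stdint.h>

#define OUTER_IPV4_HDR_LEN 20u	/* stands in for sizeof(struct rte_ipv4_hdr) */
#define UDP_HDR_LEN 8u		/* stands in for sizeof(struct rte_udp_hdr) */

/*
 * dgram_len written into the NAT-T UDP header for an IPv4 tunnel:
 * total packet length minus the L2 header, the outer IPv4 header
 * and the UDP header, mirroring update_tun_outb_l3hdr() above.
 */
uint16_t
natt_dgram_len_v4(uint32_t plen, uint32_t l2len)
{
	return (uint16_t)(plen - l2len - (OUTER_IPV4_HDR_LEN + UDP_HDR_LEN));
}
```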
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/iph.h | 17 +++++++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..c5c213a2b4 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dst_port = sa->natt.dport;
+ udph->src_port = sa->natt.sport;
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dst_port = sa->natt.dport;
+ udph->src_port = sa->natt.sport;
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 08/10] ipsec: add support for SA telemetry
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 09/10] ipsec: add support for initial SQN value Radu Nicolau
` (2 subsequent siblings)
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for IPsec SAs.
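The counters exposed by this patch are updated on the datapath: for
each burst, successfully processed packets contribute their
first-segment data_len, and the tunnel header length is subtracted once
per packet so the byte counter reflects payload. A self-contained
sketch of that accounting (plain arrays stand in for mbufs; all names
are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct sa_stats {
	uint64_t count;	/* successfully processed packets */
	uint64_t bytes;	/* bytes, tunnel header excluded */
};

/*
 * Mirrors the accounting added to pkt_flag_process(): sum data_len
 * of packets whose security offload succeeded, then subtract the
 * per-packet tunnel header length from the byte counter.
 */
void
stats_update(struct sa_stats *st, const uint32_t *data_len,
	const int *ok, size_t num, uint32_t hdr_len)
{
	uint64_t bytes = 0;
	uint64_t k = 0;
	size_t i;

	for (i = 0; i != num; i++) {
		if (ok[i]) {
			k++;
			bytes += data_len[i];
		}
	}
	st->count += k;
	st->bytes += bytes - (uint64_t)hdr_len * k;
}
```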
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 23 ++++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 9 ++
7 files changed, 317 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9fc7075796..2c02c3bb12 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..2bb52f4b8f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+/**
+ * Initialize IPsec library telemetry.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..5b55bbc098 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, rte_be_to_cpu_32(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = rte_cpu_to_be_32((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%u", rte_be_to_cpu_32(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB) {
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ } else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec Security Association statistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Association configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..fed6b6aba1 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v5 09/10] ipsec: add support for initial SQN value
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 10/10] ipsec: add ol_flags support Radu Nicolau
2021-09-15 15:25 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Ananyev, Konstantin
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 2c02c3bb12..8a6d09558f 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 5b55bbc098..242fdcd461 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 0;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
* [dpdk-dev] [PATCH v5 10/10] ipsec: add ol_flags support
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-09-10 11:32 ` Radu Nicolau
2021-09-15 15:25 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Ananyev, Konstantin
10 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-10 11:32 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update the IPsec library to set mbuf->ol_flags and to use the configured
L3 header length when setting the mbuf->tx_offload fields.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
/* reset mbuf metadata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 8a6d09558f..d8e261e6fb 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 242fdcd461..51f71b30c6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update l2_len and l3_len fields for outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
* Re: [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
` (9 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 10/10] ipsec: add ol_flags support Radu Nicolau
@ 2021-09-15 15:25 ` Ananyev, Konstantin
10 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-15 15:25 UTC (permalink / raw)
To: Nicolau, Radu
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, gakhil, anoobj, Doherty, Declan, Sinha, Abhijit,
Buckley, Daniel M, marchana, ktejasree, matan
Hi Radu,
> Add support for:
> TSO, NAT-T/UDP encapsulation, ESN
> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
> SA telemetry
> mbuf offload flags
> Initial SQN value
After applying your patches I am seeing functional ipsec tests
(examples/ipsec-secgw/test) failing - both lookaside and inline mode.
Could you please have a look.
Thanks
Konstantin
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>
> Radu Nicolau (10):
> security: add support for TSO on IPsec session
> security: add UDP params for IPsec NAT-T
> security: add ESN field to ipsec_xform
> mbuf: add IPsec ESP tunnel type
> ipsec: add support for AEAD algorithms
> ipsec: add transmit segmentation offload support
> ipsec: add support for NAT-T
> ipsec: add support for SA telemetry
> ipsec: add support for initial SQN value
> ipsec: add ol_flags support
>
> lib/ipsec/crypto.h | 137 ++++++++++++
> lib/ipsec/esp_inb.c | 88 +++++++-
> lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
> lib/ipsec/iph.h | 27 ++-
> lib/ipsec/meson.build | 2 +-
> lib/ipsec/rte_ipsec.h | 23 ++
> lib/ipsec/rte_ipsec_sa.h | 11 +-
> lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
> lib/ipsec/sa.h | 43 ++++
> lib/ipsec/version.map | 9 +
> lib/mbuf/rte_mbuf_core.h | 1 +
> lib/security/rte_security.h | 31 +++
> 12 files changed, 967 insertions(+), 73 deletions(-)
>
> --
> v2: fixed lib/ipsec/version.map updates to show correct version
> v3: fixed build error and corrected misspelled email address
> v4: add doxygen comments for the IPsec telemetry APIs
> update inline comments referring to the wrong RFC
> v5: update commit messages after feedback
> update the UDP encapsulation patch to actually use the configured ports
>
> 2.25.1
* [dpdk-dev] [PATCH v6 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (13 preceding siblings ...)
2021-09-10 11:32 ` [dpdk-dev] [PATCH v5 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 01/10] security: add support for TSO on IPsec session Radu Nicolau
` (10 more replies)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (3 subsequent siblings)
18 siblings, 11 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add support for TSO on IPsec session
security: add UDP params for IPsec NAT-T
security: add ESN field to ipsec_xform
mbuf: add IPsec ESP tunnel type
ipsec: add support for AEAD algorithms
ipsec: add transmit segmentation offload support
ipsec: add support for NAT-T
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
ipsec: add ol_flags support
lib/ipsec/crypto.h | 137 ++++++++++++
lib/ipsec/esp_inb.c | 88 +++++++-
lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
lib/ipsec/iph.h | 27 ++-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 23 ++
lib/ipsec/rte_ipsec_sa.h | 11 +-
lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 43 ++++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 31 +++
12 files changed, 967 insertions(+), 73 deletions(-)
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
v5: update commit messages after feedback
update the UDP encapsulation patch to actually use the configured ports
v6: fix initial SQN value
2.25.1
* [dpdk-dev] [PATCH v6 01/10] security: add support for TSO on IPsec session
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 12:35 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (9 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Allow user to provision a per security session maximum segment size
(MSS) for use when Transmit Segmentation Offload (TSO) is supported.
The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
ol_flags are specified in mbuf.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/security/rte_security.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 2e136d7929..495a228915 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Transmit Segmentation Offload (TSO)
+ *
+ * * 1: Enable per session security TSO support, use the MSS value
+ * provided in the IPsec security session when PKT_TX_TCP_SEG or
+ * PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported by the
+ * driver.
+ * * 0: No TSO support for offloaded IPsec packets. Hardware will not
+ * attempt to segment the packet, and packet transmission will fail
+ * if it is larger than the MTU of the interface.
+ */
+ uint32_t tso : 1;
+
};
/** IPSec security association direction */
@@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ uint32_t mss;
+ /**< IPsec payload Maximum Segment Size */
};
/**
--
2.25.1
* Re: [dpdk-dev] [PATCH v6 01/10] security: add support for TSO on IPsec session
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-09-23 12:35 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 12:35 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> Allow user to provision a per security session maximum segment size
> (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
> The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
> ol_flags are specified in mbuf.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/security/rte_security.h | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 2e136d7929..495a228915 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -181,6 +181,19 @@ struct rte_security_ipsec_sa_options {
> * * 0: Disable per session security statistics collection for this SA.
> */
> uint32_t stats : 1;
> +
> + /** Transmit Segmentation Offload (TSO)
> + *
> + * * 1: Enable per session security TSO support, use the MSS value
> + * provided in the IPsec security session when PKT_TX_TCP_SEG or
> + * PKT_TX_UDP_SEG ol_flags are set in the mbuf, if supported by the
> + * driver.
> + * * 0: No TSO support for offloaded IPsec packets. Hardware will not
> + * attempt to segment the packet, and packet transmission will fail
> + * if it is larger than the MTU of the interface.
> + */
> + uint32_t tso : 1;
> +
> };
>
> /** IPSec security association direction */
> @@ -217,6 +230,8 @@ struct rte_security_ipsec_xform {
> /**< Anti replay window size to enable sequence replay attack handling.
> * replay checking is disabled if the window size is 0.
> */
> + uint32_t mss;
> + /**< IPsec payload Maximum Segment Size */
> };
>
> /**
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
* [dpdk-dev] [PATCH v6 02/10] security: add UDP params for IPsec NAT-T
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 01/10] security: add support for TSO on IPsec session Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 12:43 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform Radu Nicolau
` (8 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add support for specifying UDP port params for the UDP encapsulation
option. RFC 3948 section 2.1 does not enforce using specific UDP ports
for the UDP-Encapsulated ESP Header.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 495a228915..84ba1b08f8 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
/**< IPsec SA Mode - transport/tunnel */
struct rte_security_ipsec_tunnel_param tunnel;
/**< Tunnel parameters, NULL for transport mode */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
uint64_t esn_soft_limit;
/**< ESN for which the overflow event need to be raised */
uint32_t replay_win_sz;
--
2.25.1
* Re: [dpdk-dev] [PATCH v6 02/10] security: add UDP params for IPsec NAT-T
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-09-23 12:43 ` Ananyev, Konstantin
2021-09-27 12:14 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 12:43 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> Add support for specifying UDP port params for the UDP encapsulation
> option. RFC 3948 section 2.1 does not enforce using specific UDP ports
> for the UDP-Encapsulated ESP Header.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/security/rte_security.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 495a228915..84ba1b08f8 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
> };
> };
>
> +struct rte_security_ipsec_udp_param {
> +
> + uint16_t sport;
> + uint16_t dport;
> +};
Would it be worth to have ability to access 32-bits at once.
Something like:
union rte_security_ipsec_udp_param {
uint32_t raw;
struct {
uint16_t sport, dport;
};
};
?
> +
> /**
> * IPsec Security Association option flags
> */
> @@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
> /**< IPsec SA Mode - transport/tunnel */
> struct rte_security_ipsec_tunnel_param tunnel;
> /**< Tunnel parameters, NULL for transport mode */
> + struct rte_security_ipsec_udp_param udp;
> + /**< UDP parameters, ignored when udp_encap option not specified */
Any reason to insert it into the middle of the xform struct?
Why not to the end?
> uint64_t esn_soft_limit;
> /**< ESN for which the overflow event need to be raised */
> uint32_t replay_win_sz;
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 02/10] security: add UDP params for IPsec NAT-T
2021-09-23 12:43 ` Ananyev, Konstantin
@ 2021-09-27 12:14 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-27 12:14 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
On 9/23/2021 1:43 PM, Ananyev, Konstantin wrote:
>> Add support for specifying UDP port params for UDP encapsulation option.
>> RFC 3948 section 2.1 does not enforce using specific UDP ports for
>> the UDP-Encapsulated ESP header.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> ---
>> lib/security/rte_security.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>> index 495a228915..84ba1b08f8 100644
>> --- a/lib/security/rte_security.h
>> +++ b/lib/security/rte_security.h
>> @@ -112,6 +112,12 @@ struct rte_security_ipsec_tunnel_param {
>> };
>> };
>>
>> +struct rte_security_ipsec_udp_param {
>> +
>> + uint16_t sport;
>> + uint16_t dport;
>> +};
> Would it be worth having the ability to access all 32 bits at once?
> Something like:
> union rte_security_ipsec_udp_param {
> uint32_t raw;
> struct {
> uint16_t sport, dport;
> };
> };
> ?
TBH I don't see any reason to access them as a 32b value...
>
>> +
>> /**
>> * IPsec Security Association option flags
>> */
>> @@ -224,6 +230,8 @@ struct rte_security_ipsec_xform {
>> /**< IPsec SA Mode - transport/tunnel */
>> struct rte_security_ipsec_tunnel_param tunnel;
>> /**< Tunnel parameters, NULL for transport mode */
>> + struct rte_security_ipsec_udp_param udp;
>> + /**< UDP parameters, ignored when udp_encap option not specified */
> Any reason to insert it into the middle of the xform struct?
> Why not to the end?
I can't see any good reason; I guess it just looked better. I will move
it to the end.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 02/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 12:46 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (7 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 84ba1b08f8..1bd09e3cc2 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
*/
uint32_t mss;
/**< IPsec payload Maximum Segment Size */
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-09-23 12:46 ` Ananyev, Konstantin
2021-09-27 12:23 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 12:46 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
>
> Update ipsec_xform definition to include ESN field.
> This allows the application to control the ESN starting value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> lib/security/rte_security.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 84ba1b08f8..1bd09e3cc2 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
> */
> uint32_t mss;
> /**< IPsec payload Maximum Segment Size */
> + union {
> + uint64_t value;
> + struct {
> + uint32_t low;
> + uint32_t hi;
Do we really need low/hi here?
As I remember, ESN is a 64-bit value, no?
> + };
> + } esn;
> + /**< Extended Sequence Number */
> };
>
> /**
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform
2021-09-23 12:46 ` Ananyev, Konstantin
@ 2021-09-27 12:23 ` Nicolau, Radu
2021-09-27 13:15 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-27 12:23 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
On 9/23/2021 1:46 PM, Ananyev, Konstantin wrote:
>> Update ipsec_xform definition to include ESN field.
>> This allows the application to control the ESN starting value.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> Acked-by: Anoob Joseph <anoobj@marvell.com>
>> ---
>> lib/security/rte_security.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>> index 84ba1b08f8..1bd09e3cc2 100644
>> --- a/lib/security/rte_security.h
>> +++ b/lib/security/rte_security.h
>> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
>> */
>> uint32_t mss;
>> /**< IPsec payload Maximum Segment Size */
>> + union {
>> + uint64_t value;
>> + struct {
>> + uint32_t low;
>> + uint32_t hi;
> Do we really need low/hi here?
> As I remember, ESN is a 64-bit value, no?
The low and high halves are managed differently, so I think for
consistency it's easier to keep them as separate fields.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform
2021-09-27 12:23 ` Nicolau, Radu
@ 2021-09-27 13:15 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-27 13:15 UTC (permalink / raw)
To: Nicolau, Radu, Akhil Goyal, Doherty, Declan
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
>
> On 9/23/2021 1:46 PM, Ananyev, Konstantin wrote:
> >> Update ipsec_xform definition to include ESN field.
> >> This allows the application to control the ESN starting value.
> >>
> >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> >> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> >> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> >> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> >> Acked-by: Anoob Joseph <anoobj@marvell.com>
> >> ---
> >> lib/security/rte_security.h | 8 ++++++++
> >> 1 file changed, 8 insertions(+)
> >>
> >> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> >> index 84ba1b08f8..1bd09e3cc2 100644
> >> --- a/lib/security/rte_security.h
> >> +++ b/lib/security/rte_security.h
> >> @@ -240,6 +240,14 @@ struct rte_security_ipsec_xform {
> >> */
> >> uint32_t mss;
> >> /**< IPsec payload Maximum Segment Size */
> >> + union {
> >> + uint64_t value;
> >> + struct {
> >> + uint32_t low;
> >> + uint32_t hi;
> > Do we really need low/hi here?
> > As I remember, ESN is a 64-bit value, no?
> The low and high halves are managed differently, so I think for
> consistency it's easier to keep them as separate fields.
Ok, if you believe it would help somehow, I am fine.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v6 04/10] mbuf: add IPsec ESP tunnel type
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (2 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 03/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 12:59 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (6 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add tunnel type for IPsec ESP tunnels
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 04/10] mbuf: add IPsec ESP tunnel type
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-09-23 12:59 ` Ananyev, Konstantin
2021-09-30 9:03 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 12:59 UTC (permalink / raw)
To: Nicolau, Radu, Olivier Matz
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, gakhil, anoobj, Doherty, Declan, Sinha, Abhijit,
Buckley, Daniel M, marchana, ktejasree, matan
>
> Add tunnel type for IPsec ESP tunnels
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> lib/mbuf/rte_mbuf_core.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index bb38d7f581..a4d95deee6 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -253,6 +253,7 @@ extern "C" {
> #define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
> #define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
> #define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
> +#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
As I can see, that's not a ptype, that's a TX flag.
Could you clarify what exactly this flag would mean for PMD TX:
- what is expected from the user who sets this flag
- what is expected from PMD that claims to support it.
BTW, would we need new DEV_TX_OFFLOAD_* for it?
> /**
> * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
> * It can be used for tunnels which are not standards or listed above.
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 04/10] mbuf: add IPsec ESP tunnel type
2021-09-23 12:59 ` Ananyev, Konstantin
@ 2021-09-30 9:03 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-30 9:03 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier Matz
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, gakhil, anoobj, Doherty, Declan, Sinha, Abhijit,
Buckley, Daniel M, marchana, ktejasree, matan
On 9/23/2021 1:59 PM, Ananyev, Konstantin wrote:
>> Add tunnel type for IPsec ESP tunnels
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> Acked-by: Akhil Goyal <gakhil@marvell.com>
>> Acked-by: Olivier Matz <olivier.matz@6wind.com>
>> ---
>> lib/mbuf/rte_mbuf_core.h | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
>> index bb38d7f581..a4d95deee6 100644
>> --- a/lib/mbuf/rte_mbuf_core.h
>> +++ b/lib/mbuf/rte_mbuf_core.h
>> @@ -253,6 +253,7 @@ extern "C" {
>> #define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
>> #define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
>> #define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
>> +#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
> As I can see, that's not a ptype, that's a TX flag.
> Could you clarify what exactly this flag would mean for PMD TX:
> - what is expected from the user who sets this flag
> - what is expected from PMD that claims to support it.
>
> BTW, would we need new DEV_TX_OFFLOAD_* for it?
There is documentation above for the other tunnel types; they are
supposed to be used for TSO purposes. I will update the commit message
to clarify this.
>
>> /**
>> * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
>> * It can be used for tunnels which are not standards or listed above.
>> --
>> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v6 05/10] ipsec: add support for AEAD algorithms
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (3 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 04/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 13:07 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (5 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally that to be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that to be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439*/
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 05/10] ipsec: add support for AEAD algorithms
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-09-23 13:07 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 13:07 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
I think it would be good to add new test cases to the examples/ipsec-secgw test harness
to cover these newly supported algorithms.
Apart from that:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
> lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
> lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
> lib/ipsec/sa.c | 54 +++++++++++++++--
> lib/ipsec/sa.h | 6 ++
> 5 files changed, 322 insertions(+), 11 deletions(-)
>
> diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
> index 3d03034590..93d20aaaa0 100644
> --- a/lib/ipsec/crypto.h
> +++ b/lib/ipsec/crypto.h
> @@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
> uint32_t cnt;
> } __rte_packed;
>
> + /*
> + * CHACHA20-POLY1305 devices have some specific requirements
> + * for IV and AAD formats.
> + * Ideally that would be done by the driver itself.
> + */
> +
> +struct aead_chacha20_poly1305_iv {
> + uint32_t salt;
> + uint64_t iv;
> + uint32_t cnt;
> +} __rte_packed;
> +
> +struct aead_chacha20_poly1305_aad {
> + uint32_t spi;
> + /*
> + * RFC 4106, section 5:
> + * Two formats of the AAD are defined:
> + * one for 32-bit sequence numbers, and one for 64-bit ESN.
> + */
> + union {
> + uint32_t u32[2];
> + uint64_t u64;
> + } sqn;
> + uint32_t align0; /* align to 16B boundary */
> +} __rte_packed;
> +
> +struct chacha20_poly1305_esph_iv {
> + struct rte_esp_hdr esph;
> + uint64_t iv;
> +} __rte_packed;
> +
> /*
> * AES-GCM devices have some specific requirements for IV and AAD formats.
> * Ideally that would be done by the driver itself.
> @@ -51,6 +82,47 @@ struct gcm_esph_iv {
> uint64_t iv;
> } __rte_packed;
>
> + /*
> + * AES-CCM devices have some specific requirements for IV and AAD formats.
> + * Ideally that would be done by the driver itself.
> + */
> +union aead_ccm_salt {
> + uint32_t salt;
> + struct inner {
> + uint8_t salt8[3];
> + uint8_t ccm_flags;
> + } inner;
> +} __rte_packed;
> +
> +
> +struct aead_ccm_iv {
> + uint8_t ccm_flags;
> + uint8_t salt[3];
> + uint64_t iv;
> + uint32_t cnt;
> +} __rte_packed;
> +
> +struct aead_ccm_aad {
> + uint8_t padding[18];
> + uint32_t spi;
> + /*
> + * RFC 4309, section 5:
> + * Two formats of the AAD are defined:
> + * one for 32-bit sequence numbers, and one for 64-bit ESN.
> + */
> + union {
> + uint32_t u32[2];
> + uint64_t u64;
> + } sqn;
> + uint32_t align0; /* align to 16B boundary */
> +} __rte_packed;
> +
> +struct ccm_esph_iv {
> + struct rte_esp_hdr esph;
> + uint64_t iv;
> +} __rte_packed;
> +
> +
> static inline void
> aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
> {
> @@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
> ctr->cnt = rte_cpu_to_be_32(1);
> }
>
> +static inline void
> +aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
> + *chacha20_poly1305,
> + uint64_t iv, uint32_t salt)
> +{
> + chacha20_poly1305->salt = salt;
> + chacha20_poly1305->iv = iv;
> + chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
> +}
> +
> static inline void
> aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
> {
> @@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
> gcm->cnt = rte_cpu_to_be_32(1);
> }
>
> +static inline void
> +aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
> +{
> + union aead_ccm_salt tsalt;
> +
> + tsalt.salt = salt;
> + ccm->ccm_flags = tsalt.inner.ccm_flags;
> + ccm->salt[0] = tsalt.inner.salt8[0];
> + ccm->salt[1] = tsalt.inner.salt8[1];
> + ccm->salt[2] = tsalt.inner.salt8[2];
> + ccm->iv = iv;
> + ccm->cnt = rte_cpu_to_be_32(1);
> +}
> +
> +
> /*
> * RFC 4106, 5 AAD Construction
> * spi and sqn should already be converted into network byte order.
> @@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
> aad->align0 = 0;
> }
>
> +/*
> + * RFC 4309, 5 AAD Construction
> + * spi and sqn should already be converted into network byte order.
> + * Make sure that unused bytes are zeroed.
> + */
> +static inline void
> +aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
> + int esn)
> +{
> + aad->spi = spi;
> + if (esn)
> + aad->sqn.u64 = sqn;
> + else {
> + aad->sqn.u32[0] = sqn_low32(sqn);
> + aad->sqn.u32[1] = 0;
> + }
> + aad->align0 = 0;
> +}
> +
> static inline void
> gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
> {
> @@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
> iv[1] = 0;
> }
>
> +
> +/*
> + * RFC 7634, 2.1 AAD Construction
> + * spi and sqn should already be converted into network byte order.
> + * Make sure that unused bytes are zeroed.
> + */
> +static inline void
> +aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
> + rte_be32_t spi, rte_be64_t sqn,
> + int esn)
> +{
> + aad->spi = spi;
> + if (esn)
> + aad->sqn.u64 = sqn;
> + else {
> + aad->sqn.u32[0] = sqn_low32(sqn);
> + aad->sqn.u32[1] = 0;
> + }
> + aad->align0 = 0;
> +}
> +
> /*
> * Helper routine to copy IV
> * Right now we support only algorithms with IV length equals 0/8/16 bytes.
> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
> index 2b1df6a032..d66c88f05d 100644
> --- a/lib/ipsec/esp_inb.c
> +++ b/lib/ipsec/esp_inb.c
> @@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
> {
> struct rte_crypto_sym_op *sop;
> struct aead_gcm_iv *gcm;
> + struct aead_ccm_iv *ccm;
> + struct aead_chacha20_poly1305_iv *chacha20_poly1305;
> struct aesctr_cnt_blk *ctr;
> uint64_t *ivc, *ivp;
> uint32_t algo;
> @@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
> sa->iv_ofs);
> aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> break;
> + case ALGO_TYPE_AES_CCM:
> + sop_aead_prepare(sop, sa, icv, pofs, plen);
> +
> + /* fill AAD IV (located inside crypto op) */
> + ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
> + sa->iv_ofs);
> + aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
> + break;
> + case ALGO_TYPE_CHACHA20_POLY1305:
> + sop_aead_prepare(sop, sa, icv, pofs, plen);
> +
> + /* fill AAD IV (located inside crypto op) */
> + chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
> + struct aead_chacha20_poly1305_iv *,
> + sa->iv_ofs);
> + aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
> + ivp[0], sa->salt);
> + break;
> case ALGO_TYPE_AES_CBC:
> case ALGO_TYPE_3DES_CBC:
> sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
> @@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
> ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
> copy_iv(ivc, ivp, sa->iv_len);
> break;
> + case ALGO_TYPE_AES_GMAC:
> + sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
> +
> + /* fill AAD IV (located inside crypto op) */
> + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
> + sa->iv_ofs);
> + aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> + break;
> case ALGO_TYPE_AES_CTR:
> sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
>
> @@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
> uint32_t *pofs, uint32_t plen, void *iv)
> {
> struct aead_gcm_iv *gcm;
> + struct aead_ccm_iv *ccm;
> + struct aead_chacha20_poly1305_iv *chacha20_poly1305;
> struct aesctr_cnt_blk *ctr;
> uint64_t *ivp;
> uint32_t clen;
> @@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
>
> switch (sa->algo_type) {
> case ALGO_TYPE_AES_GCM:
> + case ALGO_TYPE_AES_GMAC:
> gcm = (struct aead_gcm_iv *)iv;
> aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> break;
> + case ALGO_TYPE_AES_CCM:
> + ccm = (struct aead_ccm_iv *)iv;
> + aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
> + break;
> + case ALGO_TYPE_CHACHA20_POLY1305:
> + chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
> + aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
> + ivp[0], sa->salt);
> + break;
> case ALGO_TYPE_AES_CBC:
> case ALGO_TYPE_3DES_CBC:
> copy_iv(iv, ivp, sa->iv_len);
> @@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const union sym_op_data *icv)
> {
> struct aead_gcm_aad *aad;
> + struct aead_ccm_aad *caad;
> + struct aead_chacha20_poly1305_aad *chacha_aad;
>
> /* insert SQN.hi between ESP trailer and ICV */
> if (sa->sqh_len != 0)
> @@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
> * fill AAD fields, if any (aad fields are placed after icv),
> * right now we support only one AEAD algorithm: AES-GCM.
> */
> - if (sa->aad_len != 0) {
> - aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
> - aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
> + switch (sa->algo_type) {
> + case ALGO_TYPE_AES_GCM:
> + if (sa->aad_len != 0) {
> + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
> + aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
> + }
> + break;
> + case ALGO_TYPE_AES_CCM:
> + if (sa->aad_len != 0) {
> + caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
> + aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
> + }
> + break;
> + case ALGO_TYPE_CHACHA20_POLY1305:
> + if (sa->aad_len != 0) {
> + chacha_aad = (struct aead_chacha20_poly1305_aad *)
> + (icv->va + sa->icv_len);
> + aead_chacha20_poly1305_aad_fill(chacha_aad,
> + sa->spi, sqc, IS_ESN(sa));
> + }
> + break;
> }
> }
>
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 1e181cf2ce..a3f77469c3 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
> {
> struct rte_crypto_sym_op *sop;
> struct aead_gcm_iv *gcm;
> + struct aead_ccm_iv *ccm;
> + struct aead_chacha20_poly1305_iv *chacha20_poly1305;
> struct aesctr_cnt_blk *ctr;
> uint32_t algo;
>
> @@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
> /* NULL case */
> sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
> break;
> + case ALGO_TYPE_AES_GMAC:
> + /* GMAC case */
> + sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
> +
> + /* fill AAD IV (located inside crypto op) */
> + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
> + sa->iv_ofs);
> + aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> + break;
> case ALGO_TYPE_AES_GCM:
> /* AEAD (AES_GCM) case */
> sop_aead_prepare(sop, sa, icv, hlen, plen);
> @@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
> sa->iv_ofs);
> aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> break;
> + case ALGO_TYPE_AES_CCM:
> + /* AEAD (AES_CCM) case */
> + sop_aead_prepare(sop, sa, icv, hlen, plen);
> +
> + /* fill AAD IV (located inside crypto op) */
> + ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
> + sa->iv_ofs);
> + aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
> + break;
> + case ALGO_TYPE_CHACHA20_POLY1305:
> + /* AEAD (CHACHA20_POLY) case */
> + sop_aead_prepare(sop, sa, icv, hlen, plen);
> +
> + /* fill AAD IV (located inside crypto op) */
> + chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
> + struct aead_chacha20_poly1305_iv *,
> + sa->iv_ofs);
> + aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
> + ivp[0], sa->salt);
> + break;
> case ALGO_TYPE_AES_CTR:
> /* Cipher-Auth (AES-CTR *) case */
> sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
> @@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const union sym_op_data *icv)
> {
> uint32_t *psqh;
> - struct aead_gcm_aad *aad;
> + struct aead_gcm_aad *gaad;
> + struct aead_ccm_aad *caad;
> + struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
>
> /* insert SQN.hi between ESP trailer and ICV */
> if (sa->sqh_len != 0) {
> @@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
> * fill IV and AAD fields, if any (aad fields are placed after icv),
> * right now we support only one AEAD algorithm: AES-GCM .
> */
> + switch (sa->algo_type) {
> + case ALGO_TYPE_AES_GCM:
> if (sa->aad_len != 0) {
> - aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
> - aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
> + gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
> + aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
> + }
> + break;
> + case ALGO_TYPE_AES_CCM:
> + if (sa->aad_len != 0) {
> + caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
> + aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
> + }
> + break;
> + case ALGO_TYPE_CHACHA20_POLY1305:
> + if (sa->aad_len != 0) {
> + chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
> + (icv->va + sa->icv_len);
> + aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
> + sa->spi, sqc, IS_ESN(sa));
> + }
> + break;
> + default:
> + break;
> }
> }
>
> @@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
> {
> uint64_t *ivp = iv;
> struct aead_gcm_iv *gcm;
> + struct aead_ccm_iv *ccm;
> + struct aead_chacha20_poly1305_iv *chacha20_poly1305;
> struct aesctr_cnt_blk *ctr;
> uint32_t clen;
>
> @@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
> gcm = iv;
> aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> break;
> + case ALGO_TYPE_AES_CCM:
> + ccm = iv;
> + aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
> + break;
> + case ALGO_TYPE_CHACHA20_POLY1305:
> + chacha20_poly1305 = iv;
> + aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
> + ivp[0], sa->salt);
> + break;
> case ALGO_TYPE_AES_CTR:
> ctr = iv;
> aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index e59189d215..720e0f365b 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
> if (xfn != NULL)
> return -EINVAL;
> xform->aead = &xf->aead;
> +
> + /* GMAC has only auth */
> + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
> + xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
> + if (xfn != NULL)
> + return -EINVAL;
> + xform->auth = &xf->auth;
> + xform->cipher = &xfn->cipher;
> +
> /*
> * CIPHER+AUTH xforms are expected in strict order,
> * depending on SA direction:
> @@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
> sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
>
> /*
> - * for AEAD and NULL algorithms we can assume that
> + * for AEAD algorithms we can assume that
> * auth and cipher offsets would be equal.
> */
> switch (sa->algo_type) {
> case ALGO_TYPE_AES_GCM:
> - case ALGO_TYPE_NULL:
> + case ALGO_TYPE_AES_CCM:
> + case ALGO_TYPE_CHACHA20_POLY1305:
> sa->ctp.auth.raw = sa->ctp.cipher.raw;
> break;
> default:
> @@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
>
> switch (algo_type) {
> case ALGO_TYPE_AES_GCM:
> + case ALGO_TYPE_AES_CCM:
> + case ALGO_TYPE_CHACHA20_POLY1305:
> case ALGO_TYPE_AES_CTR:
> case ALGO_TYPE_NULL:
> sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
> @@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
> sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
> sa->ctp.cipher.length = sa->iv_len;
> break;
> + case ALGO_TYPE_AES_GMAC:
> + sa->ctp.cipher.offset = 0;
> + sa->ctp.cipher.length = 0;
> + break;
> }
>
> /*
> - * for AEAD and NULL algorithms we can assume that
> + * for AEAD algorithms we can assume that
> * auth and cipher offsets would be equal.
> */
> switch (algo_type) {
> case ALGO_TYPE_AES_GCM:
> - case ALGO_TYPE_NULL:
> + case ALGO_TYPE_AES_CCM:
> + case ALGO_TYPE_CHACHA20_POLY1305:
> sa->ctp.auth.raw = sa->ctp.cipher.raw;
> break;
> default:
> @@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> sa->pad_align = IPSEC_PAD_AES_GCM;
> sa->algo_type = ALGO_TYPE_AES_GCM;
> break;
> + case RTE_CRYPTO_AEAD_AES_CCM:
> + /* RFC 4309 */
> + sa->aad_len = sizeof(struct aead_ccm_aad);
> + sa->icv_len = cxf->aead->digest_length;
> + sa->iv_ofs = cxf->aead->iv.offset;
> + sa->iv_len = sizeof(uint64_t);
> + sa->pad_align = IPSEC_PAD_AES_CCM;
> + sa->algo_type = ALGO_TYPE_AES_CCM;
> + break;
> + case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
> + /* RFC 7634 & 8439 */
> + sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
> + sa->icv_len = cxf->aead->digest_length;
> + sa->iv_ofs = cxf->aead->iv.offset;
> + sa->iv_len = sizeof(uint64_t);
> + sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
> + sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
> + break;
> default:
> return -EINVAL;
> }
> + } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
> + /* RFC 4543 */
> + /* AES-GMAC is a special case of auth that needs IV */
> + sa->pad_align = IPSEC_PAD_AES_GMAC;
> + sa->iv_len = sizeof(uint64_t);
> + sa->icv_len = cxf->auth->digest_length;
> + sa->iv_ofs = cxf->auth->iv.offset;
> + sa->algo_type = ALGO_TYPE_AES_GMAC;
> +
> } else {
> sa->icv_len = cxf->auth->digest_length;
> sa->iv_ofs = cxf->cipher->iv.offset;
> - sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
>
> switch (cxf->cipher->algo) {
> case RTE_CRYPTO_CIPHER_NULL:
> @@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> }
> }
>
> + sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
> sa->udata = prm->userdata;
> sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
> sa->salt = prm->ipsec_xform.salt;
> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> index 1bffe751f5..107ebd1519 100644
> --- a/lib/ipsec/sa.h
> +++ b/lib/ipsec/sa.h
> @@ -19,7 +19,10 @@ enum {
> IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
> IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
> IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
> + IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
> + IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
> IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
> + IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
> };
>
> /* iv sizes for different algorithms */
> @@ -67,6 +70,9 @@ enum sa_algo_type {
> ALGO_TYPE_AES_CBC,
> ALGO_TYPE_AES_CTR,
> ALGO_TYPE_AES_GCM,
> + ALGO_TYPE_AES_CCM,
> + ALGO_TYPE_CHACHA20_POLY1305,
> + ALGO_TYPE_AES_GMAC,
> ALGO_TYPE_MAX
> };
>
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (4 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 05/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 14:09 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T Radu Nicolau
` (4 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_inb.c | 4 +-
lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
lib/ipsec/iph.h | 10 +++-
lib/ipsec/sa.c | 6 +++
lib/ipsec/sa.h | 4 ++
5 files changed, 114 insertions(+), 25 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..a6ab8fbdd5 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* modify packet's layout */
np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
to[i], tl, sqn + k);
- update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
- l2, hl[i] - l2, espt[i].next_proto);
+ update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
+ l2, hl[i] - l2, espt[i].next_proto, 0);
/* update mbuf's metadata */
trs_process_step3(mb[i]);
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..9fc7075796 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -2,6 +2,8 @@
* Copyright(c) 2018-2020 Intel Corporation
*/
+#include <math.h>
+
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
@@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align packet when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
insert_esph(ph, ph + hlen, uhlen);
/* update ip header fields */
- np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
- l3len, IPPROTO_ESP);
+ np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
+ l3len, IPPROTO_ESP, tso);
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
@@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
+ segments = ceil((float)pkt_l3len / sa->tso.mss);
+
+ if (m->packet_type & RTE_PTYPE_L4_TCP) {
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ m->l4_len = sizeof(struct rte_tcp_hdr);
+ } else {
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ m->l4_len = sizeof(struct rte_udp_hdr);
+ }
+
+ m->tso_segsz = sa->tso.mss;
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, increment sqn by the number of
+ * segments in the packet.
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
+ nb_sqn += nb_segs[i];
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 861f16905a..2d223199ac 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -6,6 +6,8 @@
#define _IPH_H_
#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
/**
* @file iph.h
@@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
/* update original ip header fields for transport case */
static inline int
-update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
- uint32_t l2len, uint32_t l3len, uint8_t proto)
+update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+ uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
{
int32_t rc;
@@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
v4h = p;
rc = v4h->next_proto_id;
v4h->next_proto_id = proto;
+ if (tso) {
+ v4h->hdr_checksum = 0;
+ v4h->total_length = 0;
+ }
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
/* IPv6 */
} else {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2ecbbce0a4 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->type = type;
sa->size = sz;
+
+ if (prm->ipsec_xform.options.tso == 1) {
+ sa->tso.enabled = 1;
+ sa->tso.mss = prm->ipsec_xform.mss;
+ }
+
/* check for ESN flag */
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..5e237f3525 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -113,6 +113,10 @@ struct rte_ipsec_sa {
uint8_t iv_len;
uint8_t pad_align;
uint8_t tos_mask;
+ struct {
+ uint8_t enabled:1;
+ uint16_t mss;
+ } tso;
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-09-23 14:09 ` Ananyev, Konstantin
2021-09-28 15:14 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 14:09 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> Add support for transmit segmentation offload to inline crypto processing
> mode. This offload is not supported by other offload modes, as at a
> minimum it requires inline crypto for IPsec to be supported on the
> network interface.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/ipsec/esp_inb.c | 4 +-
> lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
> lib/ipsec/iph.h | 10 +++-
> lib/ipsec/sa.c | 6 +++
> lib/ipsec/sa.h | 4 ++
> 5 files changed, 114 insertions(+), 25 deletions(-)
>
> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
> index d66c88f05d..a6ab8fbdd5 100644
> --- a/lib/ipsec/esp_inb.c
> +++ b/lib/ipsec/esp_inb.c
> @@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> /* modify packet's layout */
> np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
> to[i], tl, sqn + k);
> - update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
> - l2, hl[i] - l2, espt[i].next_proto);
> + update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
> + l2, hl[i] - l2, espt[i].next_proto, 0);
>
> /* update mbuf's metadata */
> trs_process_step3(mb[i]);
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index a3f77469c3..9fc7075796 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -2,6 +2,8 @@
> * Copyright(c) 2018-2020 Intel Corporation
> */
>
> +#include <math.h>
> +
> #include <rte_ipsec.h>
> #include <rte_esp.h>
> #include <rte_ip.h>
> @@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* number of bytes to encrypt */
> clen = plen + sizeof(*espt);
> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> + /* We don't need to pad/align packet when using TSO offload */
> + if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
Here and everywhere:
It doesn't look nice that we have to pollute generic functions with
TSO-specific flag checks all over the place.
Can we have a specific prepare/process function for the inline+TSO case,
as we already do for the cpu and inline cases?
Or just update the inline version?
>
> /* pad length + esp tail */
> pdlen = clen - plen;
> - tlen = pdlen + sa->icv_len + sqh_len;
> +
> + /* We don't append ICV length when using TSO offload */
> + if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
> + tlen = pdlen + sa->icv_len + sqh_len;
> + else
> + tlen = pdlen + sqh_len;
>
> /* do append and prepend */
> ml = rte_pktmbuf_lastseg(mb);
> @@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> char *ph, *pt;
> uint64_t *iv;
> uint32_t l2len, l3len;
> + uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
>
> l2len = mb->l2_len;
> l3len = mb->l3_len;
> @@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* number of bytes to encrypt */
> clen = plen + sizeof(*espt);
> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> + /* We don't need to pad/align packet when using TSO offload */
> + if (likely(!tso))
> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>
> /* pad length + esp tail */
> pdlen = clen - plen;
> - tlen = pdlen + sa->icv_len + sqh_len;
> +
> + /* We don't append ICV length when using TSO offload */
> + if (likely(!tso))
> + tlen = pdlen + sa->icv_len + sqh_len;
> + else
> + tlen = pdlen + sqh_len;
>
> /* do append and insert */
> ml = rte_pktmbuf_lastseg(mb);
> @@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> insert_esph(ph, ph + hlen, uhlen);
>
> /* update ip header fields */
> - np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
> - l3len, IPPROTO_ESP);
> + np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
> + l3len, IPPROTO_ESP, tso);
>
> /* update spi, seqn and iv */
> esph = (struct rte_esp_hdr *)(ph + uhlen);
> @@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> }
> }
>
> +/* check if packet will exceed MSS and segmentation is required */
> +static inline int
> +esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
> + uint16_t segments = 1;
> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> +
> + /* Only support segmentation for UDP/TCP flows */
> + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
> + return segments;
> +
> + if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
> + segments = ceil((float)pkt_l3len / sa->tso.mss);
Float calculations in the middle of the data path?
Just to calculate a round-up?
Doesn't look good to me at all.
> +
> + if (m->packet_type & RTE_PTYPE_L4_TCP) {
> + m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
That's really strange - why would the ipsec library set PKT_TX_TCP_SEG unconditionally?
That should be the responsibility of the upper layer, I think.
In the lib we should only check whether TSO was requested for that packet or not.
Same for UDP.
> + m->l4_len = sizeof(struct rte_tcp_hdr);
Hmm, how do we know there are no TCP options present in that packet?
Wouldn't it be better to expect the user to provide a proper l4_len for such packets?
> + } else {
> + m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
> + m->l4_len = sizeof(struct rte_udp_hdr);
> + }
> +
> + m->tso_segsz = sa->tso.mss;
> + }
> +
> + return segments;
> +}
> +
> /*
> * process group of ESP outbound tunnel packets destined for
> * INLINE_CRYPTO type of device.
> @@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> int32_t rc;
> - uint32_t i, k, n;
> + uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
> uint64_t sqn;
> rte_be64_t sqc;
> struct rte_ipsec_sa *sa;
> union sym_op_data icv;
> uint64_t iv[IPSEC_MAX_IV_QWORD];
> uint32_t dr[num];
> + uint16_t nb_segs[num];
>
> sa = ss->sa;
>
> - n = num;
> - sqn = esn_outb_update_sqn(sa, &n);
> - if (n != num)
> + for (i = 0; i != num; i++) {
> + nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
> + nb_sqn += nb_segs[i];
> + }
> +
> + nb_sqn_alloc = nb_sqn;
> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> + if (nb_sqn_alloc != nb_sqn)
> rte_errno = EOVERFLOW;
>
> k = 0;
> - for (i = 0; i != n; i++) {
> -
> + for (i = 0; i != num; i++) {
> sqc = rte_cpu_to_be_64(sqn + i);
> gen_iv(iv, sqc);
>
> @@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> dr[i - k] = i;
> rte_errno = -rc;
> }
> +
> + /**
> + * If packet is using tso, increment sqn by the number of
> + * segments for packet
> + */
> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> + sqn += nb_segs[i] - 1;
> }
>
> /* copy not processed mbufs beyond good ones */
> - if (k != n && k != 0)
> - move_bad_mbufs(mb, dr, n, n - k);
> + if (k != num && k != 0)
> + move_bad_mbufs(mb, dr, num, num - k);
>
> inline_outb_mbuf_prepare(ss, mb, k);
> return k;
> @@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> int32_t rc;
> - uint32_t i, k, n;
> + uint32_t i, k, nb_sqn, nb_sqn_alloc;
> uint64_t sqn;
> rte_be64_t sqc;
> struct rte_ipsec_sa *sa;
> union sym_op_data icv;
> uint64_t iv[IPSEC_MAX_IV_QWORD];
> uint32_t dr[num];
> + uint16_t nb_segs[num];
>
> sa = ss->sa;
>
> - n = num;
> - sqn = esn_outb_update_sqn(sa, &n);
> - if (n != num)
> + /* Calculate number of sequence numbers required */
> + for (i = 0, nb_sqn = 0; i != num; i++) {
> + nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
> + nb_sqn += nb_segs[i];
> + }
> +
> + nb_sqn_alloc = nb_sqn;
> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> + if (nb_sqn_alloc != nb_sqn)
> rte_errno = EOVERFLOW;
>
> k = 0;
> - for (i = 0; i != n; i++) {
> + for (i = 0; i != num; i++) {
>
> sqc = rte_cpu_to_be_64(sqn + i);
> gen_iv(iv, sqc);
> @@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> dr[i - k] = i;
> rte_errno = -rc;
> }
> +
> + /**
> + * If packet is using tso, increment sqn by the number of
> + * segments for packet
> + */
> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> + sqn += nb_segs[i] - 1;
> }
>
> /* copy not processed mbufs beyond good ones */
> - if (k != n && k != 0)
> - move_bad_mbufs(mb, dr, n, n - k);
> + if (k != num && k != 0)
> + move_bad_mbufs(mb, dr, num, num - k);
>
> inline_outb_mbuf_prepare(ss, mb, k);
> return k;
> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
> index 861f16905a..2d223199ac 100644
> --- a/lib/ipsec/iph.h
> +++ b/lib/ipsec/iph.h
> @@ -6,6 +6,8 @@
> #define _IPH_H_
>
> #include <rte_ip.h>
> +#include <rte_udp.h>
> +#include <rte_tcp.h>
>
> /**
> * @file iph.h
> @@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
>
> /* update original ip header fields for transport case */
> static inline int
> -update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> - uint32_t l2len, uint32_t l3len, uint8_t proto)
> +update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> + uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
Hmm... why change the name of the function?
> {
> int32_t rc;
>
> @@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> v4h = p;
> rc = v4h->next_proto_id;
> v4h->next_proto_id = proto;
> + if (tso) {
> + v4h->hdr_checksum = 0;
> + v4h->total_length = 0;
total_length will be overwritten unconditionally on the next line below.
Another question - why is it necessary?
Is it a HW-specific requirement or ... ?
> + }
> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> /* IPv6 */
> } else {
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 720e0f365b..2ecbbce0a4 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> sa->type = type;
> sa->size = sz;
>
> +
> + if (prm->ipsec_xform.options.tso == 1) {
> + sa->tso.enabled = 1;
> + sa->tso.mss = prm->ipsec_xform.mss;
> + }
> +
> /* check for ESN flag */
> sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
> UINT32_MAX : UINT64_MAX;
> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> index 107ebd1519..5e237f3525 100644
> --- a/lib/ipsec/sa.h
> +++ b/lib/ipsec/sa.h
> @@ -113,6 +113,10 @@ struct rte_ipsec_sa {
> uint8_t iv_len;
> uint8_t pad_align;
> uint8_t tos_mask;
> + struct {
> + uint8_t enabled:1;
> + uint16_t mss;
> + } tso;
Wouldn't one field be enough?
uint16_t tso_mss;
And if it is zero, then TSO is disabled.
In fact, do we need it at all?
Wouldn't it be better to request the user to fill mbuf->tso_segsz properly for us?
>
> /* template for tunnel header */
> uint8_t hdr[IPSEC_MAX_HDR_SIZE];
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support
2021-09-23 14:09 ` Ananyev, Konstantin
@ 2021-09-28 15:14 ` Nicolau, Radu
2021-09-28 22:24 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-28 15:14 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
On 9/23/2021 3:09 PM, Ananyev, Konstantin wrote:
>
>> Add support for transmit segmentation offload to inline crypto processing
>> mode. This offload is not supported by other offload modes, as at a
>> minimum it requires inline crypto for IPsec to be supported on the
>> network interface.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> ---
>> lib/ipsec/esp_inb.c | 4 +-
>> lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
>> lib/ipsec/iph.h | 10 +++-
>> lib/ipsec/sa.c | 6 +++
>> lib/ipsec/sa.h | 4 ++
>> 5 files changed, 114 insertions(+), 25 deletions(-)
>>
>> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
>> index d66c88f05d..a6ab8fbdd5 100644
>> --- a/lib/ipsec/esp_inb.c
>> +++ b/lib/ipsec/esp_inb.c
>> @@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>> /* modify packet's layout */
>> np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
>> to[i], tl, sqn + k);
>> - update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
>> - l2, hl[i] - l2, espt[i].next_proto);
>> + update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
>> + l2, hl[i] - l2, espt[i].next_proto, 0);
>>
>> /* update mbuf's metadata */
>> trs_process_step3(mb[i]);
>> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
>> index a3f77469c3..9fc7075796 100644
>> --- a/lib/ipsec/esp_outb.c
>> +++ b/lib/ipsec/esp_outb.c
>> @@ -2,6 +2,8 @@
>> * Copyright(c) 2018-2020 Intel Corporation
>> */
>>
>> +#include <math.h>
>> +
>> #include <rte_ipsec.h>
>> #include <rte_esp.h>
>> #include <rte_ip.h>
>> @@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>>
>> /* number of bytes to encrypt */
>> clen = plen + sizeof(*espt);
>> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>> +
>> + /* We don't need to pad/align packet when using TSO offload */
>> + if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
>> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>> +
> Here and everywhere:
> It doesn't look nice that we have to pollute generic functions with
> checking TSO specific flags all over the place.
> Can we probably have a specific prepare/process function for inline+tso case?
> As we do have for cpu and inline cases right now.
> Or just update inline version?
I looked at doing this, but unless I copy these two functions I can't move
this out.
>
>> /* pad length + esp tail */
>> pdlen = clen - plen;
>> - tlen = pdlen + sa->icv_len + sqh_len;
>> +
>> + /* We don't append ICV length when using TSO offload */
>> + if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
>> + tlen = pdlen + sa->icv_len + sqh_len;
>> + else
>> + tlen = pdlen + sqh_len;
>>
>> /* do append and prepend */
>> ml = rte_pktmbuf_lastseg(mb);
>> @@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>> char *ph, *pt;
>> uint64_t *iv;
>> uint32_t l2len, l3len;
>> + uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
>>
>> l2len = mb->l2_len;
>> l3len = mb->l3_len;
>> @@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>>
>> /* number of bytes to encrypt */
>> clen = plen + sizeof(*espt);
>> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>> +
>> + /* We don't need to pad/align packet when using TSO offload */
>> + if (likely(!tso))
>> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>>
>> /* pad length + esp tail */
>> pdlen = clen - plen;
>> - tlen = pdlen + sa->icv_len + sqh_len;
>> +
>> + /* We don't append ICV length when using TSO offload */
>> + if (likely(!tso))
>> + tlen = pdlen + sa->icv_len + sqh_len;
>> + else
>> + tlen = pdlen + sqh_len;
>>
>> /* do append and insert */
>> ml = rte_pktmbuf_lastseg(mb);
>> @@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>> insert_esph(ph, ph + hlen, uhlen);
>>
>> /* update ip header fields */
>> - np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
>> - l3len, IPPROTO_ESP);
>> + np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
>> + l3len, IPPROTO_ESP, tso);
>>
>> /* update spi, seqn and iv */
>> esph = (struct rte_esp_hdr *)(ph + uhlen);
>> @@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
>> }
>> }
>>
>> +/* check if packet will exceed MSS and segmentation is required */
>> +static inline int
>> +esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
>> + uint16_t segments = 1;
>> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
>> +
>> + /* Only support segmentation for UDP/TCP flows */
>> + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
>> + return segments;
>> +
>> + if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
>> + segments = ceil((float)pkt_l3len / sa->tso.mss);
> Float calculations in the middle of data-path?
> Just to calculate roundup?
> Doesn't look good to me at all.
It doesn't look good to me either - I will rework it.
>
>> +
>> + if (m->packet_type & RTE_PTYPE_L4_TCP) {
>> + m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
> That's really strange - why ipsec library will set PKT_TX_TCP_SEG unconditionally?
> That should be responsibility of the upper layer, I think.
> In the lib we should only check was tso requested for that packet or not.
> Same for UDP.
These are under an if(TSO) condition.
>
>> + m->l4_len = sizeof(struct rte_tcp_hdr);
> Hmm, how do we know there are no TCP options present for that packet?
> Wouldn't it be better to expect user to provide proper l4_len for such packets?
You're right, I will update it.
>
>> + } else {
>> + m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
>> + m->l4_len = sizeof(struct rte_udp_hdr);
>> + }
>> +
>> + m->tso_segsz = sa->tso.mss;
>> + }
>> +
>> + return segments;
>> +}
>> +
>> /*
>> * process group of ESP outbound tunnel packets destined for
>> * INLINE_CRYPTO type of device.
>> @@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>> struct rte_mbuf *mb[], uint16_t num)
>> {
>> int32_t rc;
>> - uint32_t i, k, n;
>> + uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
>> uint64_t sqn;
>> rte_be64_t sqc;
>> struct rte_ipsec_sa *sa;
>> union sym_op_data icv;
>> uint64_t iv[IPSEC_MAX_IV_QWORD];
>> uint32_t dr[num];
>> + uint16_t nb_segs[num];
>>
>> sa = ss->sa;
>>
>> - n = num;
>> - sqn = esn_outb_update_sqn(sa, &n);
>> - if (n != num)
>> + for (i = 0; i != num; i++) {
>> + nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
>> + nb_sqn += nb_segs[i];
>> + }
>> +
>> + nb_sqn_alloc = nb_sqn;
>> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
>> + if (nb_sqn_alloc != nb_sqn)
>> rte_errno = EOVERFLOW;
>>
>> k = 0;
>> - for (i = 0; i != n; i++) {
>> -
>> + for (i = 0; i != num; i++) {
>> sqc = rte_cpu_to_be_64(sqn + i);
>> gen_iv(iv, sqc);
>>
>> @@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>> dr[i - k] = i;
>> rte_errno = -rc;
>> }
>> +
>> + /**
>> + * If packet is using tso, increment sqn by the number of
>> + * segments for packet
>> + */
>> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
>> + sqn += nb_segs[i] - 1;
>> }
>>
>> /* copy not processed mbufs beyond good ones */
>> - if (k != n && k != 0)
>> - move_bad_mbufs(mb, dr, n, n - k);
>> + if (k != num && k != 0)
>> + move_bad_mbufs(mb, dr, num, num - k);
>>
>> inline_outb_mbuf_prepare(ss, mb, k);
>> return k;
>> @@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>> struct rte_mbuf *mb[], uint16_t num)
>> {
>> int32_t rc;
>> - uint32_t i, k, n;
>> + uint32_t i, k, nb_sqn, nb_sqn_alloc;
>> uint64_t sqn;
>> rte_be64_t sqc;
>> struct rte_ipsec_sa *sa;
>> union sym_op_data icv;
>> uint64_t iv[IPSEC_MAX_IV_QWORD];
>> uint32_t dr[num];
>> + uint16_t nb_segs[num];
>>
>> sa = ss->sa;
>>
>> - n = num;
>> - sqn = esn_outb_update_sqn(sa, &n);
>> - if (n != num)
>> + /* Calculate number of sequence numbers required */
>> + for (i = 0, nb_sqn = 0; i != num; i++) {
>> + nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
>> + nb_sqn += nb_segs[i];
>> + }
>> +
>> + nb_sqn_alloc = nb_sqn;
>> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
>> + if (nb_sqn_alloc != nb_sqn)
>> rte_errno = EOVERFLOW;
>>
>> k = 0;
>> - for (i = 0; i != n; i++) {
>> + for (i = 0; i != num; i++) {
>>
>> sqc = rte_cpu_to_be_64(sqn + i);
>> gen_iv(iv, sqc);
>> @@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>> dr[i - k] = i;
>> rte_errno = -rc;
>> }
>> +
>> + /**
>> + * If packet is using tso, increment sqn by the number of
>> + * segments for packet
>> + */
>> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
>> + sqn += nb_segs[i] - 1;
>> }
>>
>> /* copy not processed mbufs beyond good ones */
>> - if (k != n && k != 0)
>> - move_bad_mbufs(mb, dr, n, n - k);
>> + if (k != num && k != 0)
>> + move_bad_mbufs(mb, dr, num, num - k);
>>
>> inline_outb_mbuf_prepare(ss, mb, k);
>> return k;
>> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
>> index 861f16905a..2d223199ac 100644
>> --- a/lib/ipsec/iph.h
>> +++ b/lib/ipsec/iph.h
>> @@ -6,6 +6,8 @@
>> #define _IPH_H_
>>
>> #include <rte_ip.h>
>> +#include <rte_udp.h>
>> +#include <rte_tcp.h>
>>
>> /**
>> * @file iph.h
>> @@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
>>
>> /* update original ip header fields for transport case */
>> static inline int
>> -update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>> - uint32_t l2len, uint32_t l3len, uint8_t proto)
>> +update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>> + uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
> Hmm... why change the name of the function?
>
>> {
>> int32_t rc;
>>
>> @@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>> v4h = p;
>> rc = v4h->next_proto_id;
>> v4h->next_proto_id = proto;
>> + if (tso) {
>> + v4h->hdr_checksum = 0;
>> + v4h->total_length = 0;
> total_len will be overwritten unconditionally at next line below.
>
> Another question - why it is necessary?
> Is it HW specific requirement or ... ?
It looks wrong, I will rewrite this.
>
>
>> + }
>> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
>
>> /* IPv6 */
>> } else {
>> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
>> index 720e0f365b..2ecbbce0a4 100644
>> --- a/lib/ipsec/sa.c
>> +++ b/lib/ipsec/sa.c
>> @@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
>> sa->type = type;
>> sa->size = sz;
>>
>> +
>> + if (prm->ipsec_xform.options.tso == 1) {
>> + sa->tso.enabled = 1;
>> + sa->tso.mss = prm->ipsec_xform.mss;
>> + }
>> +
>> /* check for ESN flag */
>> sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
>> UINT32_MAX : UINT64_MAX;
>> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
>> index 107ebd1519..5e237f3525 100644
>> --- a/lib/ipsec/sa.h
>> +++ b/lib/ipsec/sa.h
>> @@ -113,6 +113,10 @@ struct rte_ipsec_sa {
>> uint8_t iv_len;
>> uint8_t pad_align;
>> uint8_t tos_mask;
>> + struct {
>> + uint8_t enabled:1;
>> + uint16_t mss;
>> + } tso;
> Wouldn't one field be enough?
> uint16_t tso_mss;
> And if it is zero, then tso is disabled.
> In fact, do we need it at all?
> Wouldn't it be better to request user to fill mbuf->tso_segsz properly for us?
We added an option to rte_security_ipsec_sa_options to allow the user to
enable TSO per SA and specify the MSS in the session parameters.
We can request the user to fill mbuf->tso_segsz, but with this patch we are
doing it for the user.
>
>> /* template for tunnel header */
>> uint8_t hdr[IPSEC_MAX_HDR_SIZE];
>> --
>> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support
2021-09-28 15:14 ` Nicolau, Radu
@ 2021-09-28 22:24 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-28 22:24 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> On 9/23/2021 3:09 PM, Ananyev, Konstantin wrote:
> >
> >> Add support for transmit segmentation offload to inline crypto processing
> >> mode. This offload is not supported by other offload modes, as at a
> >> minimum it requires inline crypto for IPsec to be supported on the
> >> network interface.
> >>
> >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> >> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> >> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> >> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> >> ---
> >> lib/ipsec/esp_inb.c | 4 +-
> >> lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
> >> lib/ipsec/iph.h | 10 +++-
> >> lib/ipsec/sa.c | 6 +++
> >> lib/ipsec/sa.h | 4 ++
> >> 5 files changed, 114 insertions(+), 25 deletions(-)
> >>
> >> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
> >> index d66c88f05d..a6ab8fbdd5 100644
> >> --- a/lib/ipsec/esp_inb.c
> >> +++ b/lib/ipsec/esp_inb.c
> >> @@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> >> /* modify packet's layout */
> >> np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
> >> to[i], tl, sqn + k);
> >> - update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
> >> - l2, hl[i] - l2, espt[i].next_proto);
> >> + update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
> >> + l2, hl[i] - l2, espt[i].next_proto, 0);
> >>
> >> /* update mbuf's metadata */
> >> trs_process_step3(mb[i]);
> >> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> >> index a3f77469c3..9fc7075796 100644
> >> --- a/lib/ipsec/esp_outb.c
> >> +++ b/lib/ipsec/esp_outb.c
> >> @@ -2,6 +2,8 @@
> >> * Copyright(c) 2018-2020 Intel Corporation
> >> */
> >>
> >> +#include <math.h>
> >> +
> >> #include <rte_ipsec.h>
> >> #include <rte_esp.h>
> >> #include <rte_ip.h>
> >> @@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> >>
> >> /* number of bytes to encrypt */
> >> clen = plen + sizeof(*espt);
> >> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> >> +
> >> + /* We don't need to pad/align packet when using TSO offload */
> >> + if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
> >> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> >> +
> > Here and everywhere:
> > It doesn't look nice that we have to pollute generic functions with
> > checking TSO specific flags all over the place.
> > Can we probably have a specific prepare/process function for inline+tso case?
> > As we do have for cpu and inline cases right now.
> > Or just update inline version?
> I looked at doing this but unless I copy these 2 functions I can't move
> this out.
> >
> >> /* pad length + esp tail */
> >> pdlen = clen - plen;
> >> - tlen = pdlen + sa->icv_len + sqh_len;
> >> +
> >> + /* We don't append ICV length when using TSO offload */
> >> + if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
> >> + tlen = pdlen + sa->icv_len + sqh_len;
> >> + else
> >> + tlen = pdlen + sqh_len;
> >>
> >> /* do append and prepend */
> >> ml = rte_pktmbuf_lastseg(mb);
> >> @@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> >> char *ph, *pt;
> >> uint64_t *iv;
> >> uint32_t l2len, l3len;
> >> + uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
> >>
> >> l2len = mb->l2_len;
> >> l3len = mb->l3_len;
> >> @@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> >>
> >> /* number of bytes to encrypt */
> >> clen = plen + sizeof(*espt);
> >> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> >> +
> >> + /* We don't need to pad/align packet when using TSO offload */
> >> + if (likely(!tso))
> >> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> >>
> >> /* pad length + esp tail */
> >> pdlen = clen - plen;
> >> - tlen = pdlen + sa->icv_len + sqh_len;
> >> +
> >> + /* We don't append ICV length when using TSO offload */
> >> + if (likely(!tso))
> >> + tlen = pdlen + sa->icv_len + sqh_len;
> >> + else
> >> + tlen = pdlen + sqh_len;
> >>
> >> /* do append and insert */
> >> ml = rte_pktmbuf_lastseg(mb);
> >> @@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> >> insert_esph(ph, ph + hlen, uhlen);
> >>
> >> /* update ip header fields */
> >> - np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
> >> - l3len, IPPROTO_ESP);
> >> + np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
> >> + l3len, IPPROTO_ESP, tso);
> >>
> >> /* update spi, seqn and iv */
> >> esph = (struct rte_esp_hdr *)(ph + uhlen);
> >> @@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> >> }
> >> }
> >>
> >> +/* check if packet will exceed MSS and segmentation is required */
> >> +static inline int
> >> +esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
> >> + uint16_t segments = 1;
> >> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> >> +
> >> + /* Only support segmentation for UDP/TCP flows */
> >> + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
> >> + return segments;
> >> +
> >> + if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
> >> + segments = ceil((float)pkt_l3len / sa->tso.mss);
> > Float calculations in the middle of data-path?
> > Just to calculate roundup?
> > Doesn't look good to me at all.
> It doesn't look good to me either - I will rework it.
> >
> >> +
> >> + if (m->packet_type & RTE_PTYPE_L4_TCP) {
> >> + m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
> > That's really strange - why ipsec library will set PKT_TX_TCP_SEG unconditionally?
> > That should be responsibility of the upper layer, I think.
> > In the lib we should only check was tso requested for that packet or not.
> > Same for UDP.
> These are under an if(TSO) condition.
> >
> >> + m->l4_len = sizeof(struct rte_tcp_hdr);
> > Hmm, how do we know there are no TCP options present for that packet?
> > Wouldn't it be better to expect user to provide proper l4_len for such packets?
> You're right, I will update it.
>
> >
> >> + } else {
> >> + m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
> >> + m->l4_len = sizeof(struct rte_udp_hdr);
> >> + }
> >> +
> >> + m->tso_segsz = sa->tso.mss;
> >> + }
> >> +
> >> + return segments;
> >> +}
> >> +
> >> /*
> >> * process group of ESP outbound tunnel packets destined for
> >> * INLINE_CRYPTO type of device.
> >> @@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> >> struct rte_mbuf *mb[], uint16_t num)
> >> {
> >> int32_t rc;
> >> - uint32_t i, k, n;
> >> + uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
> >> uint64_t sqn;
> >> rte_be64_t sqc;
> >> struct rte_ipsec_sa *sa;
> >> union sym_op_data icv;
> >> uint64_t iv[IPSEC_MAX_IV_QWORD];
> >> uint32_t dr[num];
> >> + uint16_t nb_segs[num];
> >>
> >> sa = ss->sa;
> >>
> >> - n = num;
> >> - sqn = esn_outb_update_sqn(sa, &n);
> >> - if (n != num)
> >> + for (i = 0; i != num; i++) {
> >> + nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
> >> + nb_sqn += nb_segs[i];
> >> + }
> >> +
> >> + nb_sqn_alloc = nb_sqn;
> >> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> >> + if (nb_sqn_alloc != nb_sqn)
> >> rte_errno = EOVERFLOW;
> >>
> >> k = 0;
> >> - for (i = 0; i != n; i++) {
> >> -
> >> + for (i = 0; i != num; i++) {
> >> sqc = rte_cpu_to_be_64(sqn + i);
> >> gen_iv(iv, sqc);
> >>
> >> @@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> >> dr[i - k] = i;
> >> rte_errno = -rc;
> >> }
> >> +
> >> + /**
> >> + * If packet is using tso, increment sqn by the number of
> >> + * segments for packet
> >> + */
> >> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> >> + sqn += nb_segs[i] - 1;
> >> }
> >>
> >> /* copy not processed mbufs beyond good ones */
> >> - if (k != n && k != 0)
> >> - move_bad_mbufs(mb, dr, n, n - k);
> >> + if (k != num && k != 0)
> >> + move_bad_mbufs(mb, dr, num, num - k);
> >>
> >> inline_outb_mbuf_prepare(ss, mb, k);
> >> return k;
> >> @@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> >> struct rte_mbuf *mb[], uint16_t num)
> >> {
> >> int32_t rc;
> >> - uint32_t i, k, n;
> >> + uint32_t i, k, nb_sqn, nb_sqn_alloc;
> >> uint64_t sqn;
> >> rte_be64_t sqc;
> >> struct rte_ipsec_sa *sa;
> >> union sym_op_data icv;
> >> uint64_t iv[IPSEC_MAX_IV_QWORD];
> >> uint32_t dr[num];
> >> + uint16_t nb_segs[num];
> >>
> >> sa = ss->sa;
> >>
> >> - n = num;
> >> - sqn = esn_outb_update_sqn(sa, &n);
> >> - if (n != num)
> >> + /* Calculate number of sequence numbers required */
> >> + for (i = 0, nb_sqn = 0; i != num; i++) {
> >> + nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
> >> + nb_sqn += nb_segs[i];
> >> + }
> >> +
> >> + nb_sqn_alloc = nb_sqn;
> >> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> >> + if (nb_sqn_alloc != nb_sqn)
> >> rte_errno = EOVERFLOW;
> >>
> >> k = 0;
> >> - for (i = 0; i != n; i++) {
> >> + for (i = 0; i != num; i++) {
> >>
> >> sqc = rte_cpu_to_be_64(sqn + i);
> >> gen_iv(iv, sqc);
> >> @@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> >> dr[i - k] = i;
> >> rte_errno = -rc;
> >> }
> >> +
> >> + /**
> >> + * If packet is using tso, increment sqn by the number of
> >> + * segments for packet
> >> + */
> >> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> >> + sqn += nb_segs[i] - 1;
> >> }
> >>
> >> /* copy not processed mbufs beyond good ones */
> >> - if (k != n && k != 0)
> >> - move_bad_mbufs(mb, dr, n, n - k);
> >> + if (k != num && k != 0)
> >> + move_bad_mbufs(mb, dr, num, num - k);
> >>
> >> inline_outb_mbuf_prepare(ss, mb, k);
> >> return k;
> >> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
> >> index 861f16905a..2d223199ac 100644
> >> --- a/lib/ipsec/iph.h
> >> +++ b/lib/ipsec/iph.h
> >> @@ -6,6 +6,8 @@
> >> #define _IPH_H_
> >>
> >> #include <rte_ip.h>
> >> +#include <rte_udp.h>
> >> +#include <rte_tcp.h>
> >>
> >> /**
> >> * @file iph.h
> >> @@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
> >>
> >> /* update original ip header fields for transport case */
> >> static inline int
> >> -update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> >> - uint32_t l2len, uint32_t l3len, uint8_t proto)
> >> +update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> >> + uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
> > Hmm... why change the name of the function?
> >
> >> {
> >> int32_t rc;
> >>
> >> @@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> >> v4h = p;
> >> rc = v4h->next_proto_id;
> >> v4h->next_proto_id = proto;
> >> + if (tso) {
> >> + v4h->hdr_checksum = 0;
> >> + v4h->total_length = 0;
> > total_length will be overwritten unconditionally on the next line below.
> >
> > Another question: why is it necessary?
> > Is it a HW-specific requirement or ... ?
> It looks wrong, I will rewrite this.
> >
> >
> >> + }
> >> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> >
> >> /* IPv6 */
> >> } else {
> >> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> >> index 720e0f365b..2ecbbce0a4 100644
> >> --- a/lib/ipsec/sa.c
> >> +++ b/lib/ipsec/sa.c
> >> @@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> >> sa->type = type;
> >> sa->size = sz;
> >>
> >> +
> >> + if (prm->ipsec_xform.options.tso == 1) {
> >> + sa->tso.enabled = 1;
> >> + sa->tso.mss = prm->ipsec_xform.mss;
> >> + }
> >> +
> >> /* check for ESN flag */
> >> sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
> >> UINT32_MAX : UINT64_MAX;
> >> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> >> index 107ebd1519..5e237f3525 100644
> >> --- a/lib/ipsec/sa.h
> >> +++ b/lib/ipsec/sa.h
> >> @@ -113,6 +113,10 @@ struct rte_ipsec_sa {
> >> uint8_t iv_len;
> >> uint8_t pad_align;
> >> uint8_t tos_mask;
> >> + struct {
> >> + uint8_t enabled:1;
> >> + uint16_t mss;
> >> + } tso;
> > Wouldn't one field be enough?
> > uint16_t tso_mss;
> > And if it is zero, then tso is disabled.
> > In fact, do we need it at all?
> > Wouldn't it be better to request user to fill mbuf->tso_segsz properly for us?
>
> We added an option to rte_security_ipsec_sa_options to allow the user to
> enable TSO per SA and specify the MSS in the session parameters.
After another thought, it doesn’t look like a good approach to me:
on one hand, the same SA can be used for multiple IP addresses;
on the other hand, the MSS value can differ on a per-connection basis.
So different TCP connections within the same SA can easily have different MSS values.
So I think we shouldn't save the MSS in the SA at all.
Instead, we probably need to request the user to fill mbuf->tso_segsz for us.
>
> We can request the user to fill mbuf->tso_segsz, but with this patch we are
> doing it for the user.
>
> >
> >> /* template for tunnel header */
> >> uint8_t hdr[IPSEC_MAX_HDR_SIZE];
> >> --
> >> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (5 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 16:43 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 08/10] ipsec: add support for SA telemetry Radu Nicolau
` (3 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/iph.h | 17 +++++++++++++++++
lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
lib/ipsec/sa.c | 13 ++++++++++++-
lib/ipsec/sa.h | 4 ++++
4 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
index 2d223199ac..c5c213a2b4 100644
--- a/lib/ipsec/iph.h
+++ b/lib/ipsec/iph.h
@@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
{
struct rte_ipv4_hdr *v4h;
struct rte_ipv6_hdr *v6h;
+ struct rte_udp_hdr *udph;
uint8_t is_outh_ipv4;
if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
@@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
v4h = outh;
v4h->packet_id = pid;
v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v4h + 1);
+ udph->dst_port = sa->natt.dport;
+ udph->src_port = sa->natt.sport;
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v4h) + sizeof(*udph)));
+ }
} else {
is_outh_ipv4 = 0;
v6h = outh;
v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
sizeof(*v6h));
+
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ udph = (struct rte_udp_hdr *)(v6h + 1);
+ udph->dst_port = sa->natt.dport;
+ udph->src_port = sa->natt.sport;
+ udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
+ (sizeof(*v6h) + sizeof(*udph)));
+ }
}
if (sa->type & TUN_HDR_MSK)
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..40d1e70d45 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
* - inbound/outbound
* - mode (TRANSPORT/TUNNEL)
* - for TUNNEL outer IP version (IPv4/IPv6)
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
* ...
@@ -86,7 +87,8 @@ enum {
RTE_SATP_LOG2_PROTO,
RTE_SATP_LOG2_DIR,
RTE_SATP_LOG2_MODE,
- RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
+ RTE_SATP_LOG2_SQN,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
RTE_SATP_LOG2_DSCP
@@ -109,6 +111,10 @@ enum {
#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2ecbbce0a4..8e369e4618 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 5e237f3525..3f38921eb3 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,10 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ struct {
+ uint16_t sport;
+ uint16_t dport;
+ } natt;
uint32_t salt;
uint8_t algo_type;
uint8_t proto; /* next proto */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-09-23 16:43 ` Ananyev, Konstantin
2021-09-27 13:27 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 16:43 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> packets.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/ipsec/iph.h | 17 +++++++++++++++++
> lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
> lib/ipsec/sa.c | 13 ++++++++++++-
> lib/ipsec/sa.h | 4 ++++
> 4 files changed, 40 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
> index 2d223199ac..c5c213a2b4 100644
> --- a/lib/ipsec/iph.h
> +++ b/lib/ipsec/iph.h
> @@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
> {
> struct rte_ipv4_hdr *v4h;
> struct rte_ipv6_hdr *v6h;
> + struct rte_udp_hdr *udph;
> uint8_t is_outh_ipv4;
>
> if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> @@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
> v4h = outh;
> v4h->packet_id = pid;
> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> +
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + udph = (struct rte_udp_hdr *)(v4h + 1);
> + udph->dst_port = sa->natt.dport;
> + udph->src_port = sa->natt.sport;
> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> + (sizeof(*v4h) + sizeof(*udph)));
> + }
> } else {
> is_outh_ipv4 = 0;
> v6h = outh;
> v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> sizeof(*v6h));
> +
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + udph = (struct rte_udp_hdr *)(v6h + 1);
Why do you presume there would always be IPv6 with no options?
Shouldn't we use the hdr_l3_len provided by the user?
Another thing - I am not sure we need the 'natt' field in rte_ipsec_sa at all.
The UDP header (sport, dport) is consistent and could be part of the header template
provided by the user at SA initialization time.
> + udph->dst_port = sa->natt.dport;
> + udph->src_port = sa->natt.sport;
> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> + (sizeof(*v6h) + sizeof(*udph)));
Whose responsibility will it be to update the cksum field?
> + }
> }
>
> if (sa->type & TUN_HDR_MSK)
> diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
> index cf51ad8338..40d1e70d45 100644
> --- a/lib/ipsec/rte_ipsec_sa.h
> +++ b/lib/ipsec/rte_ipsec_sa.h
> @@ -76,6 +76,7 @@ struct rte_ipsec_sa_prm {
> * - inbound/outbound
> * - mode (TRANSPORT/TUNNEL)
> * - for TUNNEL outer IP version (IPv4/IPv6)
> + * - NAT-T UDP encapsulated (TUNNEL mode only)
> * - are SA SQN operations 'atomic'
> * - ESN enabled/disabled
> * ...
> @@ -86,7 +87,8 @@ enum {
> RTE_SATP_LOG2_PROTO,
> RTE_SATP_LOG2_DIR,
> RTE_SATP_LOG2_MODE,
> - RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
> + RTE_SATP_LOG2_NATT = RTE_SATP_LOG2_MODE + 2,
Why insert it in the middle?
Why not add it to the end, as people usually do for new options?
> + RTE_SATP_LOG2_SQN,
> RTE_SATP_LOG2_ESN,
> RTE_SATP_LOG2_ECN,
> RTE_SATP_LOG2_DSCP
> @@ -109,6 +111,10 @@ enum {
> #define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG2_MODE)
> #define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG2_MODE)
>
> +#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
> +
> #define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG2_SQN)
> #define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG2_SQN)
> #define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG2_SQN)
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 2ecbbce0a4..8e369e4618 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -217,6 +217,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
> } else
> return -EINVAL;
>
> + /* check for UDP encapsulation flag */
> + if (prm->ipsec_xform.options.udp_encap == 1)
> + tp |= RTE_IPSEC_SATP_NATT_ENABLE;
> +
> /* check for ESN flag */
> if (prm->ipsec_xform.options.esn == 0)
> tp |= RTE_IPSEC_SATP_ESN_DISABLE;
> @@ -372,7 +376,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> const struct crypto_xform *cxf)
> {
> static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> - RTE_IPSEC_SATP_MODE_MASK;
> + RTE_IPSEC_SATP_MODE_MASK |
> + RTE_IPSEC_SATP_NATT_MASK;
>
> if (prm->ipsec_xform.options.ecn)
> sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
> @@ -475,10 +480,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_inb_init(sa);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> esp_outb_tun_init(sa, prm);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_outb_init(sa, 0);
> break;
> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> index 5e237f3525..3f38921eb3 100644
> --- a/lib/ipsec/sa.h
> +++ b/lib/ipsec/sa.h
> @@ -101,6 +101,10 @@ struct rte_ipsec_sa {
> uint64_t msk;
> uint64_t val;
> } tx_offload;
> + struct {
> + uint16_t sport;
> + uint16_t dport;
> + } natt;
> uint32_t salt;
> uint8_t algo_type;
> uint8_t proto; /* next proto */
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T
2021-09-23 16:43 ` Ananyev, Konstantin
@ 2021-09-27 13:27 ` Nicolau, Radu
2021-09-27 14:55 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-27 13:27 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
On 9/23/2021 5:43 PM, Ananyev, Konstantin wrote:
>
>> Add support for the IPsec NAT-Traversal use case for Tunnel mode
>> packets.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> ---
>> lib/ipsec/iph.h | 17 +++++++++++++++++
>> lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
>> lib/ipsec/sa.c | 13 ++++++++++++-
>> lib/ipsec/sa.h | 4 ++++
>> 4 files changed, 40 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
>> index 2d223199ac..c5c213a2b4 100644
>> --- a/lib/ipsec/iph.h
>> +++ b/lib/ipsec/iph.h
>> @@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
>> {
>> struct rte_ipv4_hdr *v4h;
>> struct rte_ipv6_hdr *v6h;
>> + struct rte_udp_hdr *udph;
>> uint8_t is_outh_ipv4;
>>
>> if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
>> @@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
>> v4h = outh;
>> v4h->packet_id = pid;
>> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
>> +
>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
>> + udph = (struct rte_udp_hdr *)(v4h + 1);
>> + udph->dst_port = sa->natt.dport;
>> + udph->src_port = sa->natt.sport;
>> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
>> + (sizeof(*v4h) + sizeof(*udph)));
>> + }
>> } else {
>> is_outh_ipv4 = 0;
>> v6h = outh;
>> v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
>> sizeof(*v6h));
>> +
>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
>> + udph = (struct rte_udp_hdr *)(v6h + 1);
> Why you presume there would be always ipv6 with no options?
> Shouldn't we use hdr_l3_len provided by user?
Yes, I will use hdr_l3_len.
> Another thing - I am not sure we need 'natt' field in rte_ipsec_sa at all.
> UDP header (sport, dport) is consitant and could be part of header template
> provided by user at sa initialization time.
The rte_security_ipsec_sa_options::udp_encap flag assumes that the UDP
encapsulation, i.e. adding the header, is not the responsibility of the
user, so we can append it (transparently to the user) to the header
template, but the user should not do it. Will this work?
>
>> + udph->dst_port = sa->natt.dport;
>> + udph->src_port = sa->natt.sport;
>> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
>> + (sizeof(*v6h) + sizeof(*udph)));
> Whose responsibility will be to update cksum field?
According to the RFC (RFC 3948) it should be zero and the rx side must not
check/use it. I will set it to zero.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T
2021-09-27 13:27 ` Nicolau, Radu
@ 2021-09-27 14:55 ` Ananyev, Konstantin
2021-09-27 15:06 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-27 14:55 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> On 9/23/2021 5:43 PM, Ananyev, Konstantin wrote:
> >
> >> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> >> packets.
> >>
> >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> >> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> >> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> >> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> >> ---
> >> lib/ipsec/iph.h | 17 +++++++++++++++++
> >> lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
> >> lib/ipsec/sa.c | 13 ++++++++++++-
> >> lib/ipsec/sa.h | 4 ++++
> >> 4 files changed, 40 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
> >> index 2d223199ac..c5c213a2b4 100644
> >> --- a/lib/ipsec/iph.h
> >> +++ b/lib/ipsec/iph.h
> >> @@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
> >> {
> >> struct rte_ipv4_hdr *v4h;
> >> struct rte_ipv6_hdr *v6h;
> >> + struct rte_udp_hdr *udph;
> >> uint8_t is_outh_ipv4;
> >>
> >> if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> >> @@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
> >> v4h = outh;
> >> v4h->packet_id = pid;
> >> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> >> +
> >> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> >> + udph = (struct rte_udp_hdr *)(v4h + 1);
> >> + udph->dst_port = sa->natt.dport;
> >> + udph->src_port = sa->natt.sport;
> >> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> >> + (sizeof(*v4h) + sizeof(*udph)));
> >> + }
> >> } else {
> >> is_outh_ipv4 = 0;
> >> v6h = outh;
> >> v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> >> sizeof(*v6h));
> >> +
> >> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> >> + udph = (struct rte_udp_hdr *)(v6h + 1);
> > Why you presume there would be always ipv6 with no options?
> > Shouldn't we use hdr_l3_len provided by user?
>
> Yes, I will use hdr_l3_len.
>
> > Another thing - I am not sure we need 'natt' field in rte_ipsec_sa at all.
> > UDP header (sport, dport) is consitant and could be part of header template
> > provided by user at sa initialization time.
>
> The rte_security_ipsec_sa_options::udp_encap flag assumes that the UDP
> encapsulation i.e. adding the header is not the responsibility of the
> user, so we can append it (transparently to the user) to the header
> template but the user should not do it. Will this work?
Interesting idea, I suppose that should work...
Do I get it right that this UDP header will always be appended to the end of
the user-provided tun.hdr?
>
>
> >
> >> + udph->dst_port = sa->natt.dport;
> >> + udph->src_port = sa->natt.sport;
> >> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> >> + (sizeof(*v6h) + sizeof(*udph)));
> > Whose responsibility will be to update cksum field?
> According to the RFC it should be zero and the rx side must not
> check/use it. I will set it as zero
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T
2021-09-27 14:55 ` Ananyev, Konstantin
@ 2021-09-27 15:06 ` Nicolau, Radu
2021-09-27 15:39 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-09-27 15:06 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
On 9/27/2021 3:55 PM, Ananyev, Konstantin wrote:
>
>> On 9/23/2021 5:43 PM, Ananyev, Konstantin wrote:
>>>> Add support for the IPsec NAT-Traversal use case for Tunnel mode
>>>> packets.
>>>>
>>>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>>>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>>>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>>>> ---
>>>> lib/ipsec/iph.h | 17 +++++++++++++++++
>>>> lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
>>>> lib/ipsec/sa.c | 13 ++++++++++++-
>>>> lib/ipsec/sa.h | 4 ++++
>>>> 4 files changed, 40 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
>>>> index 2d223199ac..c5c213a2b4 100644
>>>> --- a/lib/ipsec/iph.h
>>>> +++ b/lib/ipsec/iph.h
>>>> @@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
>>>> {
>>>> struct rte_ipv4_hdr *v4h;
>>>> struct rte_ipv6_hdr *v6h;
>>>> + struct rte_udp_hdr *udph;
>>>> uint8_t is_outh_ipv4;
>>>>
>>>> if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
>>>> @@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
>>>> v4h = outh;
>>>> v4h->packet_id = pid;
>>>> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
>>>> +
>>>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
>>>> + udph = (struct rte_udp_hdr *)(v4h + 1);
>>>> + udph->dst_port = sa->natt.dport;
>>>> + udph->src_port = sa->natt.sport;
>>>> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
>>>> + (sizeof(*v4h) + sizeof(*udph)));
>>>> + }
>>>> } else {
>>>> is_outh_ipv4 = 0;
>>>> v6h = outh;
>>>> v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
>>>> sizeof(*v6h));
>>>> +
>>>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
>>>> + udph = (struct rte_udp_hdr *)(v6h + 1);
>>> Why you presume there would be always ipv6 with no options?
>>> Shouldn't we use hdr_l3_len provided by user?
>> Yes, I will use hdr_l3_len.
>>
>>> Another thing - I am not sure we need 'natt' field in rte_ipsec_sa at all.
>>> UDP header (sport, dport) is consitant and could be part of header template
>>> provided by user at sa initialization time.
>> The rte_security_ipsec_sa_options::udp_encap flag assumes that the UDP
>> encapsulation i.e. adding the header is not the responsibility of the
>> user, so we can append it (transparently to the user) to the header
>> template but the user should not do it. Will this work?
> Interesting idea, I suppose that should work...
> Do I get it right, this udp header will always be appended to the end of
> user provided tun.hdr?
Yes. So normally, after whatever the user puts in, we insert the ESP header.
When UDP encapsulation is enabled we should insert the UDP header
before the ESP header, so this arrangement should work.
>
>>
>>>> + udph->dst_port = sa->natt.dport;
>>>> + udph->src_port = sa->natt.sport;
>>>> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
>>>> + (sizeof(*v6h) + sizeof(*udph)));
>>> Whose responsibility will be to update cksum field?
>> According to the RFC it should be zero and the rx side must not
>> check/use it. I will set it as zero
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T
2021-09-27 15:06 ` Nicolau, Radu
@ 2021-09-27 15:39 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-27 15:39 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> On 9/27/2021 3:55 PM, Ananyev, Konstantin wrote:
> >
> >> On 9/23/2021 5:43 PM, Ananyev, Konstantin wrote:
> >>>> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> >>>> packets.
> >>>>
> >>>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> >>>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> >>>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> >>>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> >>>> ---
> >>>> lib/ipsec/iph.h | 17 +++++++++++++++++
> >>>> lib/ipsec/rte_ipsec_sa.h | 8 +++++++-
> >>>> lib/ipsec/sa.c | 13 ++++++++++++-
> >>>> lib/ipsec/sa.h | 4 ++++
> >>>> 4 files changed, 40 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
> >>>> index 2d223199ac..c5c213a2b4 100644
> >>>> --- a/lib/ipsec/iph.h
> >>>> +++ b/lib/ipsec/iph.h
> >>>> @@ -251,6 +251,7 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
> >>>> {
> >>>> struct rte_ipv4_hdr *v4h;
> >>>> struct rte_ipv6_hdr *v6h;
> >>>> + struct rte_udp_hdr *udph;
> >>>> uint8_t is_outh_ipv4;
> >>>>
> >>>> if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> >>>> @@ -258,11 +259,27 @@ update_tun_outb_l3hdr(const struct rte_ipsec_sa *sa, void *outh,
> >>>> v4h = outh;
> >>>> v4h->packet_id = pid;
> >>>> v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> >>>> +
> >>>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> >>>> + udph = (struct rte_udp_hdr *)(v4h + 1);
> >>>> + udph->dst_port = sa->natt.dport;
> >>>> + udph->src_port = sa->natt.sport;
> >>>> + udph->dgram_len = rte_cpu_to_be_16(plen - l2len -
> >>>> + (sizeof(*v4h) + sizeof(*udph)));
> >>>> + }
> >>>> } else {
> >>>> is_outh_ipv4 = 0;
> >>>> v6h = outh;
> >>>> v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> >>>> sizeof(*v6h));
> >>>> +
> >>>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> >>>> + udph = (struct rte_udp_hdr *)(v6h + 1);
> >>> Why you presume there would be always ipv6 with no options?
> >>> Shouldn't we use hdr_l3_len provided by user?
> >> Yes, I will use hdr_l3_len.
> >>
> >>> Another thing - I am not sure we need 'natt' field in rte_ipsec_sa at all.
> >>> UDP header (sport, dport) is consitant and could be part of header template
> >>> provided by user at sa initialization time.
> >> The rte_security_ipsec_sa_options::udp_encap flag assumes that the UDP
> >> encapsulation i.e. adding the header is not the responsibility of the
> >> user, so we can append it (transparently to the user) to the header
> >> template but the user should not do it. Will this work?
> > Interesting idea, I suppose that should work...
> > Do I get it right, this udp header will always be appended to the end of
> > user provided tun.hdr?
> Yes. So normally after whatever user puts in we insert the ESP header.
> When the UDP encapsulation is enabled we should insert the UDP header
> before the ESP header, so this arrangement should work.
Ok, thanks for the clarification.
Looks like a good approach to me.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v6 08/10] ipsec: add support for SA telemetry
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (6 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 07/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-23 18:31 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 09/10] ipsec: add support for initial SQN value Radu Nicolau
` (2 subsequent siblings)
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for ipsec SAs
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_inb.c | 1 +
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/meson.build | 2 +-
lib/ipsec/rte_ipsec.h | 23 ++++
lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
lib/ipsec/sa.h | 21 ++++
lib/ipsec/version.map | 9 ++
7 files changed, 317 insertions(+), 6 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index a6ab8fbdd5..8cb4c16302 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* process packets, extract seq numbers */
k = process(sa, mb, sqn, dr, num, sqh_len);
+ sa->statistics.count += k;
/* handle unprocessed mbufs */
if (k != num && k != 0)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9fc7075796..2c02c3bb12 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -617,7 +617,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
ml = rte_pktmbuf_lastseg(mb[i]);
+ bytes += mb[i]->data_len;
/* remove high-order 32 bits of esn from packet len */
mb[i]->pkt_len -= sa->sqh_len;
ml->data_len -= sa->sqh_len;
@@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes - (sa->hdr_len * k);
/* handle unprocessed mbufs */
if (k != num) {
@@ -659,16 +663,19 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes = 0;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
}
/* check if packet will exceed MSS and segmentation is required */
@@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
sqn += nb_segs[i] - 1;
}
+
/* copy not processed mbufs beyond good ones */
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..f5e44cfe47 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..2bb52f4b8f 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+struct rte_ipsec_telemetry;
+
+/**
+ * Initialize IPsec library telemetry.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_init(void);
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 8e369e4618..5b55bbc098 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -7,7 +7,7 @@
#include <rte_ip.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
-
+#include <rte_telemetry.h>
#include "sa.h"
#include "ipsec_sqn.h"
#include "crypto.h"
@@ -25,6 +25,7 @@ struct crypto_xform {
struct rte_crypto_aead_xform *aead;
};
+
/*
* helper routine, fills internal crypto_xform structure.
*/
@@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
wsz = prm->ipsec_xform.replay_win_sz;
return ipsec_sa_size(type, &wsz, &nb);
}
+struct rte_ipsec_telemetry {
+ bool initialized;
+ LIST_HEAD(, rte_ipsec_sa) sa_list_head;
+};
+
+#include <rte_malloc.h>
+
+static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
+ .initialized = false };
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ rte_tel_data_add_array_u64(data, htonl(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return a dict of SAs, each with a dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ bool user_specified_spi = false;
+ uint32_t sa_spi;
+
+ if (params) {
+ user_specified_spi = true;
+ sa_spi = htonl((uint32_t)atoi(params));
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ char sa_name[64];
+
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (user_specified_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/value pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes);
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = htonl((uint32_t)atoi(params));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
+ uint64_t mode;
+
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/value pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data, "TSO",
+ sa->tso.enabled ? "enabled" : "disabled");
+
+ if (sa->tso.enabled)
+ rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
+
+ }
+
+ return 0;
+}
+int
+rte_ipsec_telemetry_init(void)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+ int rc = 0;
+
+ if (telemetry->initialized)
+ return rc;
+
+ LIST_INIT(&telemetry->sa_list_head);
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec Security Associations with telemetry enabled.");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec Security Association statistics. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ rc = rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_configuration,
+ "Returns IPsec Security Association configuration. Parameters: int sa_spi");
+ if (rc)
+ return rc;
+
+ telemetry->initialized = true;
+
+ return rc;
+}
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
+
+ LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
+
+ return 0;
+}
int
rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
@@ -644,19 +888,24 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes = 0;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->data_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 3f38921eb3..b9b7ebec5b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -122,9 +122,30 @@ struct rte_ipsec_sa {
uint16_t mss;
} tso;
+ LIST_ENTRY(rte_ipsec_sa) telemetry_next;
+ /**< list entry for telemetry enabled SA */
+
+
+ RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
+
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
+
+ RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
+
/* template for tunnel header */
uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+ RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
/*
* sqn and replay window
* In case of SA handled by multiple threads *sqn* cacheline
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..fed6b6aba1 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_init;
+ rte_ipsec_telemetry_sa_add;
+
+};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 08/10] ipsec: add support for SA telemetry
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-09-23 18:31 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-23 18:31 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir, Ray Kinsella
Cc: dev, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal, gakhil,
anoobj, Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/ipsec/esp_inb.c | 1 +
> lib/ipsec/esp_outb.c | 12 +-
> lib/ipsec/meson.build | 2 +-
> lib/ipsec/rte_ipsec.h | 23 ++++
> lib/ipsec/sa.c | 255 +++++++++++++++++++++++++++++++++++++++++-
> lib/ipsec/sa.h | 21 ++++
> lib/ipsec/version.map | 9 ++
> 7 files changed, 317 insertions(+), 6 deletions(-)
>
> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
> index a6ab8fbdd5..8cb4c16302 100644
> --- a/lib/ipsec/esp_inb.c
> +++ b/lib/ipsec/esp_inb.c
> @@ -722,6 +722,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>
> /* process packets, extract seq numbers */
> k = process(sa, mb, sqn, dr, num, sqh_len);
> + sa->statistics.count += k;
>
> /* handle unprocessed mbufs */
> if (k != num && k != 0)
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 9fc7075796..2c02c3bb12 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -617,7 +617,7 @@ uint16_t
> esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> uint16_t num)
> {
> - uint32_t i, k, icv_len, *icv;
> + uint32_t i, k, icv_len, *icv, bytes;
> struct rte_mbuf *ml;
> struct rte_ipsec_sa *sa;
> uint32_t dr[num];
> @@ -626,10 +626,12 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
> k = 0;
> icv_len = sa->icv_len;
> + bytes = 0;
>
> for (i = 0; i != num; i++) {
> if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
> ml = rte_pktmbuf_lastseg(mb[i]);
> + bytes += mb[i]->data_len;
Shouldn't it be pkt_len?
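For illustration only — a mock, not the real rte_mbuf — the distinction the question points at: data_len covers a single segment while pkt_len spans the whole chain, so summing data_len undercounts multi-segment packets:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock segment chain (illustrative, not the real rte_mbuf layout). */
struct seg {
	uint32_t data_len;	/* bytes in this segment only */
	struct seg *next;	/* next segment, NULL at chain end */
};

/* Sum data_len over all segments: this total is what pkt_len would
 * already hold for the first segment of a chained packet. */
uint32_t
chain_pkt_len(const struct seg *s)
{
	uint32_t total = 0;

	for (; s != NULL; s = s->next)
		total += s->data_len;
	return total;
}
```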
> /* remove high-order 32 bits of esn from packet len */
> mb[i]->pkt_len -= sa->sqh_len;
> ml->data_len -= sa->sqh_len;
> @@ -640,6 +642,8 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> } else
> dr[i - k] = i;
> }
> + sa->statistics.count += k;
> + sa->statistics.bytes += bytes - (sa->hdr_len * k);
I don't think you need to do the multiplication here.
It can be postponed to the reporting phase (sa->hdr_len is a constant value per SA).
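For illustration, a minimal sketch of that deferral (names are made up, not from the patch): the datapath only accumulates raw counts, and the constant per-SA header overhead is applied once, when reporting:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-SA counters: bytes holds raw totals including the
 * tunnel header, so the hot path needs only additions. */
struct sa_stats {
	uint64_t count;
	uint64_t bytes;	/* raw bytes, tunnel header included */
};

/* Reporting phase: hdr_len is a per-SA constant, so the per-packet
 * overhead can be multiplied out here instead of on the datapath. */
uint64_t
report_payload_bytes(const struct sa_stats *st, uint32_t hdr_len)
{
	return st->bytes - (uint64_t)hdr_len * st->count;
}
```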
>
> /* handle unprocessed mbufs */
> if (k != num) {
> @@ -659,16 +663,19 @@ static inline void
> inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> - uint32_t i, ol_flags;
> + uint32_t i, ol_flags, bytes = 0;
Let's keep the coding style consistent: please do the assignment as a separate statement.
>
> ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
> for (i = 0; i != num; i++) {
>
> mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
> + bytes += mb[i]->data_len;
pkt_len?
> if (ol_flags != 0)
> rte_security_set_pkt_metadata(ss->security.ctx,
> ss->security.ses, mb[i], NULL);
> }
> + ss->sa->statistics.count += num;
> + ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
> }
>
> /* check if packet will exceed MSS and segmentation is required */
> @@ -752,6 +759,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> sqn += nb_segs[i] - 1;
> }
>
> +
Empty line.
> /* copy not processed mbufs beyond good ones */
> if (k != num && k != 0)
> move_bad_mbufs(mb, dr, num, num - k);
> diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
> index 1497f573bb..f5e44cfe47 100644
> --- a/lib/ipsec/meson.build
> +++ b/lib/ipsec/meson.build
> @@ -6,4 +6,4 @@ sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
> headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
> indirect_headers += files('rte_ipsec_group.h')
>
> -deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
> +deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
> diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
> index dd60d95915..2bb52f4b8f 100644
> --- a/lib/ipsec/rte_ipsec.h
> +++ b/lib/ipsec/rte_ipsec.h
> @@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> return ss->pkt_func.process(ss, mb, num);
> }
>
> +
> +struct rte_ipsec_telemetry;
> +
> +/**
> + * Initialize IPsec library telemetry.
> + * @return
> + * 0 on success, negative value otherwise.
> + */
> +__rte_experimental
> +int
> +rte_ipsec_telemetry_init(void);
> +
> +/**
> + * Enable per SA telemetry for a specific SA.
> + * @param sa
> + * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
> + * @return
> + * 0 on success, negative value otherwise.
> + */
> +__rte_experimental
> +int
> +rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
> +
Why don't we have sa_delete() here?
What is the user supposed to do when destroying an SA?
Another question: what concurrency model is implied here?
> #include <rte_ipsec_group.h>
>
> #ifdef __cplusplus
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 8e369e4618..5b55bbc098 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -7,7 +7,7 @@
> #include <rte_ip.h>
> #include <rte_errno.h>
> #include <rte_cryptodev.h>
> -
> +#include <rte_telemetry.h>
As a generic comment - can we move all telemetry related functions into a new .c file
(sa_telemetry or so)? No point in having them here.
> #include "sa.h"
> #include "ipsec_sqn.h"
> #include "crypto.h"
> @@ -25,6 +25,7 @@ struct crypto_xform {
> struct rte_crypto_aead_xform *aead;
> };
>
> +
> /*
> * helper routine, fills internal crypto_xform structure.
> */
> @@ -532,6 +533,249 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
> wsz = prm->ipsec_xform.replay_win_sz;
> return ipsec_sa_size(type, &wsz, &nb);
> }
> +struct rte_ipsec_telemetry {
> + bool initialized;
Why is 'initialized' needed at all?
I think there is a static initializer for lists: LIST_HEAD_INITIALIZER.
> + LIST_HEAD(, rte_ipsec_sa) sa_list_head;
> +};
> +
> +#include <rte_malloc.h>
> +
> +static struct rte_ipsec_telemetry rte_ipsec_telemetry_instance = {
> + .initialized = false };
> +
> +static int
> +handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
> + const char *params __rte_unused,
> + struct rte_tel_data *data)
> +{
> + struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
> + struct rte_ipsec_sa *sa;
> +
> + rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
> +
> + LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
> + rte_tel_data_add_array_u64(data, htonl(sa->spi));
Should be ntohl() I believe.
BTW, why not use rte_be_to_cpu... functions here?
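To make the byte-order point concrete, a hypothetical helper (portable sketch, not a DPDK API): the SPI sits in network (big-endian) order, so displaying it means a big-endian-to-host conversion — what ntohl()/rte_be_to_cpu_32() do — not host-to-network:

```c
#include <assert.h>
#include <stdint.h>

/* Read a 32-bit SPI from its on-wire big-endian byte representation,
 * independent of the host's own endianness. */
uint32_t
spi_from_wire(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}
```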
> + }
> +
> + return 0;
> +}
> +
> +/**
> + * Handle IPsec SA statistics telemetry request
> + *
> + * Return dict of SA's with dict of key/value counters
> + *
> + * {
> + * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
> + * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
> + * }
> + *
> + */
> +static int
> +handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *data)
> +{
> + struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
> + struct rte_ipsec_sa *sa;
> + bool user_specified_spi = false;
> + uint32_t sa_spi;
> +
> + if (params) {
> + user_specified_spi = true;
> + sa_spi = htonl((uint32_t)atoi(params));
strtoul() would be a better choice here.
Another nit - you probably don't need user_specified_spi.
As I remember, SPI=0 is a reserved value, so I think it would be enough to:
sa_spi = 0; if (params) {sa_spi = ..}
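A sketch of a parse helper along those lines (hypothetical name, not from the patch): strtoul() with full error checking, returning 0 — a reserved SPI value — on any malformed input, so no separate "user specified" flag is needed:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a decimal SPI string; 0 means "no/invalid SPI supplied". */
uint32_t
parse_sa_spi(const char *params)
{
	char *end;
	unsigned long v;

	if (params == NULL || *params == '\0')
		return 0;
	errno = 0;
	v = strtoul(params, &end, 10);
	/* reject overflow, trailing garbage and out-of-range values */
	if (errno != 0 || *end != '\0' || v > UINT32_MAX)
		return 0;
	return (uint32_t)v;
}
```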
> + }
> +
> + rte_tel_data_start_dict(data);
> +
> + LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
> + char sa_name[64];
> +
> + static const char *name_pkt_cnt = "count";
> + static const char *name_byte_cnt = "bytes";
> + static const char *name_error_cnt = "errors";
> + struct rte_tel_data *sa_data;
> +
> + /* If user provided SPI only get telemetry for that SA */
> + if (user_specified_spi && (sa_spi != sa->spi))
> + continue;
> +
> + /* allocate telemetry data struct for SA telemetry */
> + sa_data = rte_tel_data_alloc();
> + if (!sa_data)
> + return -ENOMEM;
> +
> + rte_tel_data_start_dict(sa_data);
> +
> + /* add telemetry key/values pairs */
> + rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
> + sa->statistics.count);
> +
> + rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
> + sa->statistics.bytes);
> +
> + rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
> + sa->statistics.errors.count);
> +
> + /* generate telemetry label */
> + snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i", htonl(sa->spi));
Again - ntohl().
> +
> + /* add SA telemetry to dictionary container */
> + rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
> + }
> +
> + return 0;
> +}
> +
> +static int
> +handle_telemetry_cmd_ipsec_sa_configuration(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *data)
> +{
> + struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
> + struct rte_ipsec_sa *sa;
> + uint32_t sa_spi;
> +
> + if (params)
> + sa_spi = htonl((uint32_t)atoi(params));
> + else
> + return -EINVAL;
> +
> + rte_tel_data_start_dict(data);
> +
> + LIST_FOREACH(sa, &telemetry->sa_list_head, telemetry_next) {
> + uint64_t mode;
> +
> + if (sa_spi != sa->spi)
> + continue;
> +
> + /* add SA configuration key/values pairs */
> + rte_tel_data_add_dict_string(data, "Type",
> + (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
> + RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
> +
> + rte_tel_data_add_dict_string(data, "Direction",
> + (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
> + RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
> +
> + mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
> +
> + if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
> + rte_tel_data_add_dict_string(data, "Mode", "Transport");
> + } else {
> + rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
> +
> + if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
> + RTE_IPSEC_SATP_NATT_ENABLE) {
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv4-UDP");
> + } else if (sa->type &
> + RTE_IPSEC_SATP_MODE_TUNLV6) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv6-UDP");
> + }
> + } else {
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv4");
> + } else if (sa->type &
> + RTE_IPSEC_SATP_MODE_TUNLV6) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv6");
> + }
> + }
> + }
> +
> + rte_tel_data_add_dict_string(data,
> + "extended-sequence-number",
> + (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
> + RTE_IPSEC_SATP_ESN_ENABLE ?
> + "enabled" : "disabled");
> +
> + if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
> + RTE_IPSEC_SATP_DIR_IB)
> +
> + if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
> + rte_tel_data_add_dict_u64(data,
> + "sequence-number",
> + sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
> + else
> + rte_tel_data_add_dict_u64(data,
> + "sequence-number", 0);
> + else
> + rte_tel_data_add_dict_u64(data, "sequence-number",
> + sa->sqn.outb);
> +
> + rte_tel_data_add_dict_string(data,
> + "explicit-congestion-notification",
> + (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
> + RTE_IPSEC_SATP_ECN_ENABLE ?
> + "enabled" : "disabled");
> +
> + rte_tel_data_add_dict_string(data,
> + "copy-DSCP",
> + (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
> + RTE_IPSEC_SATP_DSCP_ENABLE ?
> + "enabled" : "disabled");
> +
> + rte_tel_data_add_dict_string(data, "TSO",
> + sa->tso.enabled ? "enabled" : "disabled");
> +
> + if (sa->tso.enabled)
> + rte_tel_data_add_dict_u64(data, "TSO-MSS", sa->tso.mss);
> +
> + }
> +
> + return 0;
> +}
> +int
> +rte_ipsec_telemetry_init(void)
> +{
> + struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
> + int rc = 0;
> +
> + if (telemetry->initialized)
> + return rc;
> +
> + LIST_INIT(&telemetry->sa_list_head);
> +
> + rc = rte_telemetry_register_cmd("/ipsec/sa/list",
> + handle_telemetry_cmd_ipsec_sa_list,
> + "Return list of IPsec Security Associations with telemetry enabled.");
> + if (rc)
> + return rc;
> +
> + rc = rte_telemetry_register_cmd("/ipsec/sa/stats",
> + handle_telemetry_cmd_ipsec_sa_stats,
> + "Returns IPsec Security Association statistics. Parameters: int sa_spi");
> + if (rc)
> + return rc;
> +
> + rc = rte_telemetry_register_cmd("/ipsec/sa/details",
> + handle_telemetry_cmd_ipsec_sa_configuration,
> + "Returns IPsec Security Association configuration. Parameters: int sa_spi");
> + if (rc)
> + return rc;
> +
> + telemetry->initialized = true;
> +
> + return rc;
> +}
> +
> +int
> +rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
> +{
> + struct rte_ipsec_telemetry *telemetry = &rte_ipsec_telemetry_instance;
> +
> + LIST_INSERT_HEAD(&telemetry->sa_list_head, sa, telemetry_next);
> +
> + return 0;
> +}
>
> int
> rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> @@ -644,19 +888,24 @@ uint16_t
> pkt_flag_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> - uint32_t i, k;
> + uint32_t i, k, bytes = 0;
> uint32_t dr[num];
>
> RTE_SET_USED(ss);
>
> k = 0;
> for (i = 0; i != num; i++) {
> - if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
> + if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
> k++;
> + bytes += mb[i]->data_len;
> + }
> else
> dr[i - k] = i;
> }
>
> + ss->sa->statistics.count += k;
> + ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * k);
> +
> /* handle unprocessed mbufs */
> if (k != num) {
> rte_errno = EBADMSG;
> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> index 3f38921eb3..b9b7ebec5b 100644
> --- a/lib/ipsec/sa.h
> +++ b/lib/ipsec/sa.h
> @@ -122,9 +122,30 @@ struct rte_ipsec_sa {
> uint16_t mss;
> } tso;
>
> + LIST_ENTRY(rte_ipsec_sa) telemetry_next;
> + /**< list entry for telemetry enabled SA */
I am not really fond of the idea of having telemetry list stuff embedded into the rte_ipsec_sa structure.
It creates all sorts of concurrency problems for adding/removing an SA while reading telemetry data, etc.
Another issue is when an SA is shared by multiple processes.
Instead it would be much cleaner if the telemetry list contained just a pointer to the SA.
Then it would be the user's responsibility to del/add the SA to the telemetry list at an appropriate time.
Also, the MT working model for this new API needs to be documented properly.
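A sketch of that alternative (illustrative names, not a proposed API): the telemetry list owns small nodes that merely point at an SA, so the SA structure carries no list linkage and the application decides when an SA joins or leaves; locking for concurrent readers/writers would still be the caller's responsibility:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <sys/queue.h>

struct sa;	/* opaque to the telemetry list */

/* List node holding only a pointer to the SA. */
struct tel_node {
	const struct sa *sa;
	LIST_ENTRY(tel_node) next;
};

static LIST_HEAD(, tel_node) tel_head = LIST_HEAD_INITIALIZER(tel_head);

int
tel_sa_add(const struct sa *sa)
{
	struct tel_node *n = malloc(sizeof(*n));

	if (n == NULL)
		return -ENOMEM;
	n->sa = sa;
	LIST_INSERT_HEAD(&tel_head, n, next);
	return 0;
}

int
tel_sa_del(const struct sa *sa)
{
	struct tel_node *n;

	LIST_FOREACH(n, &tel_head, next) {
		if (n->sa == sa) {
			LIST_REMOVE(n, next);
			free(n);
			return 0;
		}
	}
	return -ENOENT;
}
```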
> +
> +
> + RTE_MARKER cachealign_statistics __rte_cache_min_aligned;
What is the reason for all these extra alignments?
> +
> + /* Statistics */
> + struct {
> + uint64_t count;
> + uint64_t bytes;
> +
> + struct {
> + uint64_t count;
> + uint64_t authentication_failed;
> + } errors;
> + } statistics;
> +
> + RTE_MARKER cachealign_tunnel_header __rte_cache_min_aligned;
> +
> /* template for tunnel header */
> uint8_t hdr[IPSEC_MAX_HDR_SIZE];
>
> +
> + RTE_MARKER cachealign_tunnel_seq_num_replay_win __rte_cache_min_aligned;
> /*
> * sqn and replay window
> * In case of SA handled by multiple threads *sqn* cacheline
> diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
> index ba8753eac4..fed6b6aba1 100644
> --- a/lib/ipsec/version.map
> +++ b/lib/ipsec/version.map
> @@ -19,3 +19,12 @@ DPDK_22 {
>
> local: *;
> };
> +
> +EXPERIMENTAL {
> + global:
> +
> + # added in 21.11
> + rte_ipsec_telemetry_init;
> + rte_ipsec_telemetry_sa_add;
> +
> +};
> --
> 2.25.1
* [dpdk-dev] [PATCH v6 09/10] ipsec: add support for initial SQN value
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (7 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 08/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-24 10:22 ` Ananyev, Konstantin
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support Radu Nicolau
2021-09-24 12:42 ` [dpdk-dev] [PATCH v6 00/10] new features for ipsec and security libraries Ananyev, Konstantin
10 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_outb.c | 19 ++++++++++++-------
lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
2 files changed, 34 insertions(+), 14 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 2c02c3bb12..8a6d09558f 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
*/
static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
- struct rte_mbuf *mb[], uint16_t num)
+ struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
{
uint32_t i, ol_flags, bytes = 0;
@@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
bytes += mb[i]->data_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
- ss->security.ses, mb[i], NULL);
+ ss->security.ses, mb[i], sqn);
}
ss->sa->statistics.count += num;
ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
@@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
- k = 0;
- for (i = 0; i != num; i++) {
+ for (i = 0, k = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
if (k != num && k != 0)
move_bad_mbufs(mb, dr, num, num - k);
- inline_outb_mbuf_prepare(ss, mb, k);
+ if (sa->sqn_mask > UINT32_MAX)
+ inline_outb_mbuf_prepare(ss, mb, k, &sqn);
+ else
+ inline_outb_mbuf_prepare(ss, mb, k, NULL);
return k;
}
@@ -840,6 +845,6 @@ uint16_t
inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- inline_outb_mbuf_prepare(ss, mb, num);
+ inline_outb_mbuf_prepare(ss, mb, num, NULL);
return num;
}
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 5b55bbc098..d94684cf96 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn;
algo_type = sa->algo_type;
@@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
static void
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 1;
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
@@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, sqn);
}
/*
@@ -376,6 +378,8 @@ static int
esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
+ uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
+ prm->ipsec_xform.esn.value : 1;
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
RTE_IPSEC_SATP_MODE_MASK |
RTE_IPSEC_SATP_NATT_MASK;
@@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, sqn);
break;
}
@@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa,
+ uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 09/10] ipsec: add support for initial SQN value
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-09-24 10:22 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-24 10:22 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> Update the IPsec library to support an initial SQN value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/ipsec/esp_outb.c | 19 ++++++++++++-------
> lib/ipsec/sa.c | 29 ++++++++++++++++++++++-------
> 2 files changed, 34 insertions(+), 14 deletions(-)
>
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 2c02c3bb12..8a6d09558f 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -661,7 +661,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> */
> static inline void
> inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> - struct rte_mbuf *mb[], uint16_t num)
> + struct rte_mbuf *mb[], uint16_t num, uint64_t *sqn)
> {
> uint32_t i, ol_flags, bytes = 0;
>
> @@ -672,7 +672,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> bytes += mb[i]->data_len;
> if (ol_flags != 0)
> rte_security_set_pkt_metadata(ss->security.ctx,
> - ss->security.ses, mb[i], NULL);
> + ss->security.ses, mb[i], sqn);
The rte_security_set_pkt_metadata() documentation says that this parameter is device specific...
Could you explain the intention here:
Why do we need to set a pointer to the sqn value as a device-specific parameter?
What is the PMD expected to do here?
What will happen if the PMD expects that parameter to be something else
(not a pointer to an sqn value)?
> }
> ss->sa->statistics.count += num;
> ss->sa->statistics.bytes += bytes - (ss->sa->hdr_len * num);
> @@ -764,7 +764,10 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> if (k != num && k != 0)
> move_bad_mbufs(mb, dr, num, num - k);
>
> - inline_outb_mbuf_prepare(ss, mb, k);
> + if (sa->sqn_mask > UINT32_MAX)
Here and in other places:
there is a dedicated macro, IS_ESN(sa), for this check.
> + inline_outb_mbuf_prepare(ss, mb, k, &sqn);
> + else
> + inline_outb_mbuf_prepare(ss, mb, k, NULL);
Ok, so why do we need to pass sqn in the metadata only for the ESN case?
Is that because the ESP header stores only the lower 32 bits of the SQN value?
But, as I remember, SQN.hi is still stored inside the packet, just in a different place
(between the ESP trailer and the ICV).
> return k;
> }
>
> @@ -799,8 +802,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> if (nb_sqn_alloc != nb_sqn)
> rte_errno = EOVERFLOW;
>
> - k = 0;
> - for (i = 0; i != num; i++) {
> + for (i = 0, k = 0; i != num; i++) {
No reason for this change.
>
> sqc = rte_cpu_to_be_64(sqn + i);
> gen_iv(iv, sqc);
> @@ -828,7 +830,10 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> if (k != num && k != 0)
> move_bad_mbufs(mb, dr, num, num - k);
>
> - inline_outb_mbuf_prepare(ss, mb, k);
> + if (sa->sqn_mask > UINT32_MAX)
> + inline_outb_mbuf_prepare(ss, mb, k, &sqn);
> + else
> + inline_outb_mbuf_prepare(ss, mb, k, NULL);
> return k;
> }
>
> @@ -840,6 +845,6 @@ uint16_t
> inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> - inline_outb_mbuf_prepare(ss, mb, num);
> + inline_outb_mbuf_prepare(ss, mb, num, NULL);
> return num;
> }
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 5b55bbc098..d94684cf96 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> * Init ESP outbound specific things.
> */
> static void
> -esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
> +esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
> {
> uint8_t algo_type;
>
> - sa->sqn.outb = 1;
> + sa->sqn.outb = sqn;
>
> algo_type = sa->algo_type;
>
> @@ -356,6 +356,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
> static void
> esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> {
> + uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
> + prm->ipsec_xform.esn.value : 1;
No need to do the same thing twice - esp_outb_tun_init() can take the sqn value as a parameter.
> sa->proto = prm->tun.next_proto;
> sa->hdr_len = prm->tun.hdr_len;
> sa->hdr_l3_off = prm->tun.hdr_l3_off;
> @@ -366,7 +368,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
>
> memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
>
> - esp_outb_init(sa, sa->hdr_len);
> + esp_outb_init(sa, sa->hdr_len, sqn);
> }
>
> /*
> @@ -376,6 +378,8 @@ static int
> esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> const struct crypto_xform *cxf)
> {
> + uint64_t sqn = prm->ipsec_xform.esn.value > 0 ?
> + prm->ipsec_xform.esn.value : 1;
Here and everywhere:
Please try to keep the variable definition and its value assignment in separate statements.
That keeps the coding style consistent across the file and is easier to follow (at least to me).
> static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> RTE_IPSEC_SATP_MODE_MASK |
> RTE_IPSEC_SATP_NATT_MASK;
> @@ -492,7 +496,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
> RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> - esp_outb_init(sa, 0);
> + esp_outb_init(sa, 0, sqn);
> break;
> }
>
> @@ -503,15 +507,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> * helper function, init SA replay structure.
> */
> static void
> -fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
> +fill_sa_replay(struct rte_ipsec_sa *sa,
> + uint32_t wnd_sz, uint32_t nb_bucket, uint64_t sqn)
> {
> sa->replay.win_sz = wnd_sz;
> sa->replay.nb_bucket = nb_bucket;
> sa->replay.bucket_index_mask = nb_bucket - 1;
> sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
> - if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
> + sa->sqn.inb.rsn[0]->sqn = sqn;
> + if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
> sa->sqn.inb.rsn[1] = (struct replay_sqn *)
> ((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
> + sa->sqn.inb.rsn[1]->sqn = sqn;
> + }
> }
>
> int
> @@ -830,13 +838,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
> UINT32_MAX : UINT64_MAX;
>
> + /* if we are starting from a non-zero sn value */
> + if (prm->ipsec_xform.esn.value > 0) {
> + if (prm->ipsec_xform.direction ==
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
> + sa->sqn.outb = prm->ipsec_xform.esn.value;
Hmm... you do set sa->sqn.outb inside esp_outb_init().
Why do you need to duplicate it here?
> + }
> +
> rc = esp_sa_init(sa, prm, &cxf);
> if (rc != 0)
> rte_ipsec_sa_fini(sa);
>
> /* fill replay window related fields */
> if (nb != 0)
> - fill_sa_replay(sa, wsz, nb);
> + fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
>
> return sz;
> }
> --
> 2.25.1
* [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (8 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 09/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-09-17 9:17 ` Radu Nicolau
2021-09-22 13:18 ` Zhang, Roy Fan
2021-09-24 11:39 ` Ananyev, Konstantin
2021-09-24 12:42 ` [dpdk-dev] [PATCH v6 00/10] new features for ipsec and security libraries Ananyev, Konstantin
10 siblings, 2 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-09-17 9:17 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update the IPsec library to set mbuf->ol_flags and use the configured
L3 header length when setting the mbuf->tx_offload fields.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
---
lib/ipsec/esp_inb.c | 17 ++++++++++++--
lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
lib/ipsec/rte_ipsec_sa.h | 3 ++-
lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
lib/ipsec/sa.h | 8 +++++++
5 files changed, 109 insertions(+), 16 deletions(-)
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 8cb4c16302..5fcb41297e 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
* - tx_offload
*/
static inline void
-tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
+tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
+ uint64_t txof_val)
{
/* reset mbuf metadata: L2/L3 len, packet type */
mb->packet_type = RTE_PTYPE_UNKNOWN;
@@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
/* clear the PKT_RX_SEC_OFFLOAD flag if set */
mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+ if (is_ipv4) {
+ mb->l3_len = sizeof(struct rte_ipv4_hdr);
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ } else {
+ mb->l3_len = sizeof(struct rte_ipv6_hdr);
+ mb->ol_flags |= PKT_TX_IPV6;
+ }
}
/*
@@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
update_tun_inb_l3hdr(sa, outh, inh);
/* update mbuf's metadata */
- tun_process_step3(mb[i], sa->tx_offload.msk,
+ tun_process_step3(mb[i],
+ (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
+ RTE_IPSEC_SATP_IPV4 ? 1 : 0,
+ sa->tx_offload.msk,
sa->tx_offload.val);
+
k++;
} else
dr[i - k] = i;
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 8a6d09558f..d8e261e6fb 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -19,7 +19,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
{
- uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+ uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
struct rte_esp_hdr *esph;
struct rte_esp_tail *espt;
@@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* size of ipsec protected data */
l2len = mb->l2_len;
+ l3len = mb->l3_len;
+
plen = mb->pkt_len - l2len;
/* number of bytes to encrypt */
@@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
/* update pkt l2/l3 len */
- mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
- sa->tx_offload.val;
+ if (icrypto) {
+ mb->tx_offload =
+ (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
+ sa->inline_crypto.tx_offload.val;
+ mb->l3_len = l3len;
+
+ mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
+
+ /* set ip checksum offload for inner */
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
+ == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+ } else {
+ mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+ sa->tx_offload.val;
+
+ mb->ol_flags |= sa->tx_ol_flags;
+ }
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* shift L2/L3 headers */
insert_esph(ph, ph + hlen, uhlen);
+ if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
+ mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
+ mb->ol_flags |= PKT_TX_IPV6;
+
/* update ip header fields */
np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
l3len, IPPROTO_ESP, tso);
+
/* update spi, seqn and iv */
esph = (struct rte_esp_hdr *)(ph + uhlen);
iv = (uint64_t *)(esph + 1);
@@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
k += (rc >= 0);
@@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
gen_iv(iv, sqc);
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
k += (rc >= 0);
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index 40d1e70d45..3c36dcaa77 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
union {
struct {
uint8_t hdr_len; /**< tunnel header len */
- uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
+ uint8_t hdr_l3_off; /**< tunnel l3 header offset */
+ uint8_t hdr_l3_len; /**< tunnel l3 header len */
uint8_t next_proto; /**< next header protocol */
const void *hdr; /**< tunnel header template */
} tun; /**< tunnel mode related parameters */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index d94684cf96..149ed5dd4f 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -17,6 +17,8 @@
#define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
#define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
+#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
/* some helper structures */
struct crypto_xform {
@@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
(sa->ctp.cipher.offset + sa->ctp.cipher.length);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
}
/*
@@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+ /* update l2_len and l3_len fields for outbound mbuf */
+ sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
+ 0, /* iL2_LEN */
+ 0, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ prm->tun.hdr_l3_len, /* oL3_LEN */
+ prm->tun.hdr_l3_off, /* oL2_LEN */
+ 0);
+
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
+
+ if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+ if (sa->tx_ol_flags & PKT_TX_IPV4)
+ sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
+
/* update l2_len and l3_len fields for outbound mbuf */
- sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
- sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+ sa->tx_offload.val = rte_mbuf_tx_offload(
+ prm->tun.hdr_l3_off, /* iL2_LEN */
+ prm->tun.hdr_l3_len, /* iL3_LEN */
+ 0, /* iL4_LEN */
+ 0, /* TSO_SEG_SZ */
+ 0, /* oL3_LEN */
+ 0, /* oL2_LEN */
+ 0);
+
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+ else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
+ sa->tx_ol_flags |= PKT_TX_IPV6;
memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->salt = prm->ipsec_xform.salt;
/* preserve all values except l2_len and l3_len */
+ sa->inline_crypto.tx_offload.msk =
+ ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+ 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
+
sa->tx_offload.msk =
~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
0, 0, 0, 0, 0);
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index b9b7ebec5b..172d094c4b 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -101,6 +101,14 @@ struct rte_ipsec_sa {
uint64_t msk;
uint64_t val;
} tx_offload;
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t tx_ol_flags;
+ struct {
+ uint64_t msk;
+ uint64_t val;
+ } tx_offload;
+ } inline_crypto;
struct {
uint16_t sport;
uint16_t dport;
--
2.25.1
* Re: [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support Radu Nicolau
@ 2021-09-22 13:18 ` Zhang, Roy Fan
2021-09-24 11:39 ` Ananyev, Konstantin
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2021-09-22 13:18 UTC (permalink / raw)
To: Nicolau, Radu, Ananyev, Konstantin, Iremonger, Bernard,
Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, hemant.agrawal, gakhil, anoobj,
Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M, marchana,
ktejasree, matan
> -----Original Message-----
> From: Nicolau, Radu <radu.nicolau@intel.com>
> Sent: Friday, September 17, 2021 10:18 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; mdr@ashroe.eu; Richardson, Bruce
> <bruce.richardson@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com; gakhil@marvell.com; anoobj@marvell.com;
> Doherty, Declan <declan.doherty@intel.com>; Sinha, Abhijit
> <abhijit.sinha@intel.com>; Buckley, Daniel M <daniel.m.buckley@intel.com>;
> marchana@marvell.com; ktejasree@marvell.com; matan@nvidia.com;
> Nicolau, Radu <radu.nicolau@intel.com>
> Subject: [PATCH v6 10/10] ipsec: add ol_flags support
>
> Update the IPsec library to set mbuf->ol_flags and use the configured
> L3 header length when setting the mbuf->tx_offload fields.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* Re: [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support Radu Nicolau
2021-09-22 13:18 ` Zhang, Roy Fan
@ 2021-09-24 11:39 ` Ananyev, Konstantin
1 sibling, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-24 11:39 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> Update the IPsec library to set mbuf->ol_flags and use the configured
> L3 header length when setting the mbuf->tx_offload fields.
You stated what the patch does, but didn't explain why it is needed.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
> lib/ipsec/esp_inb.c | 17 ++++++++++++--
> lib/ipsec/esp_outb.c | 48 ++++++++++++++++++++++++++++++---------
> lib/ipsec/rte_ipsec_sa.h | 3 ++-
> lib/ipsec/sa.c | 49 ++++++++++++++++++++++++++++++++++++++--
> lib/ipsec/sa.h | 8 +++++++
> 5 files changed, 109 insertions(+), 16 deletions(-)
>
> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
> index 8cb4c16302..5fcb41297e 100644
> --- a/lib/ipsec/esp_inb.c
> +++ b/lib/ipsec/esp_inb.c
> @@ -559,7 +559,8 @@ trs_process_step3(struct rte_mbuf *mb)
> * - tx_offload
> */
> static inline void
> -tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
> +tun_process_step3(struct rte_mbuf *mb, uint8_t is_ipv4, uint64_t txof_msk,
> + uint64_t txof_val)
> {
> /* reset mbuf metadata: L2/L3 len, packet type */
> mb->packet_type = RTE_PTYPE_UNKNOWN;
> @@ -567,6 +568,14 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
>
> /* clear the PKT_RX_SEC_OFFLOAD flag if set */
> mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
> +
> + if (is_ipv4) {
> + mb->l3_len = sizeof(struct rte_ipv4_hdr);
> + mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
> + } else {
> + mb->l3_len = sizeof(struct rte_ipv6_hdr);
> + mb->ol_flags |= PKT_TX_IPV6;
> + }
Those are TX-related flags.
Why do you set them for inbound traffic?
> }
>
> /*
> @@ -618,8 +627,12 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> update_tun_inb_l3hdr(sa, outh, inh);
>
> /* update mbuf's metadata */
> - tun_process_step3(mb[i], sa->tx_offload.msk,
> + tun_process_step3(mb[i],
> + (sa->type & RTE_IPSEC_SATP_IPV_MASK) ==
> + RTE_IPSEC_SATP_IPV4 ? 1 : 0,
> + sa->tx_offload.msk,
> sa->tx_offload.val);
> +
> k++;
> } else
> dr[i - k] = i;
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 8a6d09558f..d8e261e6fb 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -19,7 +19,7 @@
>
> typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> - union sym_op_data *icv, uint8_t sqh_len);
> + union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto);
Sigh, what does icrypto mean and why is it needed?
It really would help if you put at least some comments explaining your changes.
If that's for inline-crypto-specific add-ons, then why does it have to be here, in a generic function?
Why not in inline_outb_tun_pkt_process()?
>
> /*
> * helper function to fill crypto_sym op for cipher+auth algorithms.
> @@ -140,9 +140,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
> static inline int32_t
> outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> - union sym_op_data *icv, uint8_t sqh_len)
> + union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto)
> {
> - uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
> + uint32_t clen, hlen, l2len, l3len, pdlen, pdofs, plen, tlen;
> struct rte_mbuf *ml;
> struct rte_esp_hdr *esph;
> struct rte_esp_tail *espt;
> @@ -154,6 +154,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* size of ipsec protected data */
> l2len = mb->l2_len;
> + l3len = mb->l3_len;
> +
> plen = mb->pkt_len - l2len;
>
> /* number of bytes to encrypt */
> @@ -190,8 +192,26 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
>
> /* update pkt l2/l3 len */
> - mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
> - sa->tx_offload.val;
> + if (icrypto) {
> + mb->tx_offload =
> + (mb->tx_offload & sa->inline_crypto.tx_offload.msk) |
> + sa->inline_crypto.tx_offload.val;
> + mb->l3_len = l3len;
> +
> + mb->ol_flags |= sa->inline_crypto.tx_ol_flags;
> +
> + /* set ip checksum offload for inner */
> + if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
> + mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
> + else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK)
> + == RTE_IPSEC_SATP_IPV6)
> + mb->ol_flags |= PKT_TX_IPV6;
> + } else {
> + mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
> + sa->tx_offload.val;
> +
> + mb->ol_flags |= sa->tx_ol_flags;
> + }
>
> /* copy tunnel pkt header */
> rte_memcpy(ph, sa->hdr, sa->hdr_len);
> @@ -311,7 +331,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
> /* try to update the packet itself */
> rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
> - sa->sqh_len);
> + sa->sqh_len, 0);
> /* success, setup crypto op */
> if (rc >= 0) {
> outb_pkt_xprepare(sa, sqc, &icv);
> @@ -338,7 +358,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> static inline int32_t
> outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> - union sym_op_data *icv, uint8_t sqh_len)
> + union sym_op_data *icv, uint8_t sqh_len, uint8_t icrypto __rte_unused)
> {
> uint8_t np;
> uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
> @@ -394,10 +414,16 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> /* shift L2/L3 headers */
> insert_esph(ph, ph + hlen, uhlen);
>
> + if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)
> + mb->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
> + else if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6)
> + mb->ol_flags |= PKT_TX_IPV6;
Why does the ipsec lib now have to set up these flags unconditionally?
If that's a change in functionality, it should be documented properly
with a clear explanation why.
If you believe it is a bug in the current ipsec implementation, then it should be a
separate 'fix' patch.
But so far, I don't see any good reason for that.
> /* update ip header fields */
> np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
> l3len, IPPROTO_ESP, tso);
>
> +
Empty line.
> /* update spi, seqn and iv */
> esph = (struct rte_esp_hdr *)(ph + uhlen);
> iv = (uint64_t *)(esph + 1);
> @@ -463,7 +489,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
> /* try to update the packet itself */
> rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
> - sa->sqh_len);
> + sa->sqh_len, 0);
> /* success, setup crypto op */
> if (rc >= 0) {
> outb_pkt_xprepare(sa, sqc, &icv);
> @@ -560,7 +586,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
> gen_iv(ivbuf[k], sqc);
>
> /* try to update the packet itself */
> - rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
> + rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
>
> /* success, proceed with preparations */
> if (rc >= 0) {
> @@ -741,7 +767,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> gen_iv(iv, sqc);
>
> /* try to update the packet itself */
> - rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
> + rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 1);
>
> k += (rc >= 0);
>
> @@ -808,7 +834,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> gen_iv(iv, sqc);
>
> /* try to update the packet itself */
> - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
> + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0, 0);
>
> k += (rc >= 0);
>
> diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
> index 40d1e70d45..3c36dcaa77 100644
> --- a/lib/ipsec/rte_ipsec_sa.h
> +++ b/lib/ipsec/rte_ipsec_sa.h
> @@ -38,7 +38,8 @@ struct rte_ipsec_sa_prm {
> union {
> struct {
> uint8_t hdr_len; /**< tunnel header len */
> - uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */
> + uint8_t hdr_l3_off; /**< tunnel l3 header offset */
> + uint8_t hdr_l3_len; /**< tunnel l3 header len */
> uint8_t next_proto; /**< next header protocol */
> const void *hdr; /**< tunnel header template */
> } tun; /**< tunnel mode related parameters */
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index d94684cf96..149ed5dd4f 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -17,6 +17,8 @@
>
> #define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
> #define MBUF_MAX_L3_LEN RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
> +#define MBUF_MAX_TSO_LEN RTE_LEN2MASK(RTE_MBUF_TSO_SEGSZ_BITS, uint64_t)
> +#define MBUF_MAX_OL3_LEN RTE_LEN2MASK(RTE_MBUF_OUTL3_LEN_BITS, uint64_t)
>
> /* some helper structures */
> struct crypto_xform {
> @@ -348,6 +350,11 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
> sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
> sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
> (sa->ctp.cipher.offset + sa->ctp.cipher.length);
> +
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> + sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
> + else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
> + sa->tx_ol_flags |= PKT_TX_IPV6;
Same question: why do these flags have to be *always* and unconditionally set?
This is a change in behaviour and there should be a really good reason for it.
And if we decide to proceed with it, it should be clearly outlined in both the doc
and the commit message.
> }
>
> /*
> @@ -362,9 +369,43 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> sa->hdr_len = prm->tun.hdr_len;
> sa->hdr_l3_off = prm->tun.hdr_l3_off;
>
> +
> + /* update l2_len and l3_len fields for outbound mbuf */
> + sa->inline_crypto.tx_offload.val = rte_mbuf_tx_offload(
> + 0, /* iL2_LEN */
> + 0, /* iL3_LEN */
> + 0, /* iL4_LEN */
> + 0, /* TSO_SEG_SZ */
> + prm->tun.hdr_l3_len, /* oL3_LEN */
> + prm->tun.hdr_l3_off, /* oL2_LEN */
> + 0);
> + sa->inline_crypto.tx_ol_flags |= PKT_TX_TUNNEL_ESP;
> +
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> + sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV4;
> + else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
> + sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IPV6;
> +
> + if (sa->inline_crypto.tx_ol_flags & PKT_TX_OUTER_IPV4)
> + sa->inline_crypto.tx_ol_flags |= PKT_TX_OUTER_IP_CKSUM;
> + if (sa->tx_ol_flags & PKT_TX_IPV4)
> + sa->inline_crypto.tx_ol_flags |= PKT_TX_IP_CKSUM;
> +
> /* update l2_len and l3_len fields for outbound mbuf */
> - sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
> - sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
> + sa->tx_offload.val = rte_mbuf_tx_offload(
> + prm->tun.hdr_l3_off, /* iL2_LEN */
> + prm->tun.hdr_l3_len, /* iL3_LEN */
Sigh, again that's a change in the current behaviour.
What would happen to old apps that don't set hdr_l3_len properly?
> + 0, /* iL4_LEN */
> + 0, /* TSO_SEG_SZ */
> + 0, /* oL3_LEN */
> + 0, /* oL2_LEN */
> + 0);
> +
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> + sa->tx_ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
> + else if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV6)
> + sa->tx_ol_flags |= PKT_TX_IPV6;
>
> memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
>
> @@ -473,6 +514,10 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> sa->salt = prm->ipsec_xform.salt;
>
> /* preserve all values except l2_len and l3_len */
> + sa->inline_crypto.tx_offload.msk =
> + ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
> + 0, 0, MBUF_MAX_OL3_LEN, 0, 0);
> +
> sa->tx_offload.msk =
> ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
> 0, 0, 0, 0, 0);
> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
> index b9b7ebec5b..172d094c4b 100644
> --- a/lib/ipsec/sa.h
> +++ b/lib/ipsec/sa.h
> @@ -101,6 +101,14 @@ struct rte_ipsec_sa {
> uint64_t msk;
> uint64_t val;
> } tx_offload;
> + uint64_t tx_ol_flags;
> + struct {
> + uint64_t tx_ol_flags;
> + struct {
> + uint64_t msk;
> + uint64_t val;
> + } tx_offload;
> + } inline_crypto;
I don't see any reason why we need two tx_offload fields (and tx_ol_flags) -
one generic and a second for inline.
I believe both can be squeezed into just one value.
For different methods (inline/lookaside) these values will simply be different.
Probably these fields need to be moved from struct rte_ipsec_sa to rte_ipsec_session.
> struct {
> uint16_t sport;
> uint16_t dport;
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] new features for ipsec and security libraries
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
` (9 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 10/10] ipsec: add ol_flags support Radu Nicolau
@ 2021-09-24 12:42 ` Ananyev, Konstantin
10 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-09-24 12:42 UTC (permalink / raw)
To: Nicolau, Radu
Cc: dev, mdr, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, gakhil, anoobj, Doherty, Declan, Sinha, Abhijit,
Buckley, Daniel M, marchana, ktejasree, matan
>
> Add support for:
> TSO, NAT-T/UDP encapsulation, ESN
> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
> SA telemetry
> mbuf offload flags
> Initial SQN value
I provided my comments for the individual patches.
There are a few more generic ones:
1. Documentation updates are missing.
Things that especially need to be documented properly:
- changes in the public API and current behaviour.
2. In some patches you describe the actual changes,
but without providing any reason why they are necessary.
3. For new algos/features it would be really good to extend
examples/ipsec-secgw/test with new test-cases.
4. When submitting a new version it would be really good to have in the cover-letter
a summary of changes from the previous version, so reviewers can avoid
looking through all patches again.
5. The series contains a mix of patches for completely different features.
It would be much cleaner to have a separate series for each such feature.
Say, a series to enable feature X:
- patch to update lib/security public headers (if any)
- patch(es) to update lib/ipsec
- patch(es) to update PMD to implement new functionality (if any)
- patch(es) to update examples/ipsec-secgw to enable the new functionality
- patch(es) to update examples/ipsec-secgw/test to add new test-cases (if any)
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>
> Radu Nicolau (10):
> security: add support for TSO on IPsec session
> security: add UDP params for IPsec NAT-T
> security: add ESN field to ipsec_xform
> mbuf: add IPsec ESP tunnel type
> ipsec: add support for AEAD algorithms
> ipsec: add transmit segmentation offload support
> ipsec: add support for NAT-T
> ipsec: add support for SA telemetry
> ipsec: add support for initial SQN value
> ipsec: add ol_flags support
>
> lib/ipsec/crypto.h | 137 ++++++++++++
> lib/ipsec/esp_inb.c | 88 +++++++-
> lib/ipsec/esp_outb.c | 262 +++++++++++++++++++----
> lib/ipsec/iph.h | 27 ++-
> lib/ipsec/meson.build | 2 +-
> lib/ipsec/rte_ipsec.h | 23 ++
> lib/ipsec/rte_ipsec_sa.h | 11 +-
> lib/ipsec/sa.c | 406 ++++++++++++++++++++++++++++++++++--
> lib/ipsec/sa.h | 43 ++++
> lib/ipsec/version.map | 9 +
> lib/mbuf/rte_mbuf_core.h | 1 +
> lib/security/rte_security.h | 31 +++
> 12 files changed, 967 insertions(+), 73 deletions(-)
>
> --
> v2: fixed lib/ipsec/version.map updates to show correct version
> v3: fixed build error and corrected misspelled email address
> v4: add doxygen comments for the IPsec telemetry APIs
> update inline comments referring to the wrong RFC
> v5: update commit messages after feedback
> update the UDP encapsulation patch to actually use the configured ports
> v6: fix initial SQN value
>
> 2.25.1
* [dpdk-dev] [PATCH v7 0/8] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (14 preceding siblings ...)
2021-09-17 9:17 ` [dpdk-dev] [PATCH v6 " Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 1/8] security: add ESN field to ipsec_xform Radu Nicolau
` (8 more replies)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (2 subsequent siblings)
18 siblings, 9 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Radu Nicolau (8):
security: add ESN field to ipsec_xform
ipsec: add support for AEAD algorithms
security: add UDP params for IPsec NAT-T
ipsec: add support for NAT-T
mbuf: add IPsec ESP tunnel type
ipsec: add transmit segmentation offload support
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
lib/ipsec/crypto.h | 137 +++++++++++++++++++++
lib/ipsec/esp_inb.c | 84 +++++++++++--
lib/ipsec/esp_outb.c | 210 ++++++++++++++++++++++++++++----
lib/ipsec/ipsec_telemetry.c | 237 ++++++++++++++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 ++++
lib/ipsec/rte_ipsec_sa.h | 9 +-
lib/ipsec/sa.c | 117 +++++++++++++++---
lib/ipsec/sa.h | 15 +++
lib/ipsec/version.map | 9 ++
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 15 +++
12 files changed, 811 insertions(+), 52 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
v5: update commit messages after feedback
update the UDP encapsulation patch to actually use the configured ports
v6: fix initial SQN value
v7: reworked the patches after feedback
2.25.1
* [dpdk-dev] [PATCH v7 1/8] security: add ESN field to ipsec_xform
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 2/8] ipsec: add support for AEAD algorithms Radu Nicolau
` (7 subsequent siblings)
8 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Update the ipsec_xform definition to include an ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
lib/security/rte_security.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 2e136d7929..48353a3e18 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -217,6 +217,14 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
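The `esn` union added above overlays a 64-bit `value` with `low`/`hi` 32-bit halves; note the anonymous-struct layout only matches the natural low/high split on little-endian hosts. A small endian-independent sketch of the intended packing (the `x_` names are illustrative, not the patch's API):

```c
#include <stdint.h>

/* Mirror of the esn union added to rte_security_ipsec_xform above.
 * On little-endian hosts, `low` overlays the low 32 bits of `value`
 * and `hi` the high 32 bits. */
union x_esn {
	uint64_t value;
	struct {
		uint32_t low;
		uint32_t hi;
	};
};

/* Endian-independent helper: build the 64-bit starting ESN from
 * its two halves, as an application might before session setup. */
static uint64_t
x_esn_make(uint32_t hi, uint32_t low)
{
	return ((uint64_t)hi << 32) | low;
}
```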
* [dpdk-dev] [PATCH v7 2/8] ipsec: add support for AEAD algorithms
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 1/8] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-08 18:30 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 3/8] security: add UDP params for IPsec NAT-T Radu Nicolau
` (6 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
lib/ipsec/sa.c | 54 +++++++++++++++--
lib/ipsec/sa.h | 6 ++
5 files changed, 322 insertions(+), 11 deletions(-)
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally that to be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that to be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439*/
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
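The `union aead_ccm_salt` in the patch above splits the SA's 32-bit salt into three salt bytes plus a CCM flags byte by overlaying struct members, which relies on host byte order. The same split can be written with shifts, which is endian-independent and matches the union's layout on little-endian hosts; the `x_` helpers below are an illustrative sketch, not DPDK code.

```c
#include <stdint.h>

/* ccm_flags occupies the byte after the three salt bytes, i.e. the
 * most-significant byte of the 32-bit salt on little-endian hosts. */
static uint8_t
x_ccm_flags(uint32_t salt)
{
	return (uint8_t)(salt >> 24);
}

/* salt8[i] is the i-th low byte of the 32-bit salt (i in 0..2). */
static uint8_t
x_ccm_salt_byte(uint32_t salt, int i)
{
	return (uint8_t)(salt >> (8 * i));
}
```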
* Re: [dpdk-dev] [EXT] [PATCH v7 2/8] ipsec: add support for AEAD algorithms
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 2/8] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-10-08 18:30 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-10-08 18:30 UTC (permalink / raw)
To: Radu Nicolau, Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
Anoob Joseph, declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> lib/ipsec/crypto.h | 137
> +++++++++++++++++++++++++++++++++++++++++++
> lib/ipsec/esp_inb.c | 66 ++++++++++++++++++++-
> lib/ipsec/esp_outb.c | 70 +++++++++++++++++++++-
> lib/ipsec/sa.c | 54 +++++++++++++++--
> lib/ipsec/sa.h | 6 ++
> 5 files changed, 322 insertions(+), 11 deletions(-)
Documentation updates are also missing.
* [dpdk-dev] [PATCH v7 3/8] security: add UDP params for IPsec NAT-T
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 1/8] security: add ESN field to ipsec_xform Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 2/8] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 12:20 ` [dpdk-dev] [EXT] " Anoob Joseph
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 4/8] ipsec: add support for NAT-T Radu Nicolau
` (5 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, anoobj,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Add support for specifying UDP port params for the UDP encapsulation option.
RFC 3948 section 2.1 does not enforce using specific UDP ports for the
UDP-Encapsulated ESP Header.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/security/rte_security.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 48353a3e18..033887f09a 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -112,6 +112,11 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -225,6 +230,8 @@ struct rte_security_ipsec_xform {
};
} esn;
/**< Extended Sequence Number */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
};
/**
--
2.25.1
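Since RFC 3948 uses port 4500 by convention but, as the commit message notes, does not mandate it, an application could default to 4500 while still allowing overrides. A minimal sketch, mirroring the `rte_security_ipsec_udp_param` layout above (the `x_` names and the fallback helper are assumptions, not the patch's API):

```c
#include <stdint.h>

/* Mirror of struct rte_security_ipsec_udp_param from the patch. */
struct x_ipsec_udp_param {
	uint16_t sport;
	uint16_t dport;
};

/* Hypothetical helper: treat 0 as "unset" and fall back to the
 * conventional NAT-T port 4500 for either direction. */
static struct x_ipsec_udp_param
x_udp_param(uint16_t sport, uint16_t dport)
{
	struct x_ipsec_udp_param p;

	p.sport = sport != 0 ? sport : 4500;
	p.dport = dport != 0 ? dport : 4500;
	return p;
}
```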
* Re: [dpdk-dev] [EXT] [PATCH v7 3/8] security: add UDP params for IPsec NAT-T
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 3/8] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-01 12:20 ` Anoob Joseph
0 siblings, 0 replies; 184+ messages in thread
From: Anoob Joseph @ 2021-10-01 12:20 UTC (permalink / raw)
To: Radu Nicolau, Akhil Goyal, Declan Doherty
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, abhijit.sinha,
daniel.m.buckley, Archana Muniganti, Tejasree Kondoj, matan
> Add support for specifying UDP port params for the UDP encapsulation option.
> RFC 3948 section 2.1 does not enforce using specific UDP ports for the
> UDP-Encapsulated ESP Header.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> lib/security/rte_security.h | 7 +++++++
> 1 file changed, 7 insertions(+)
>
Acked-by: Anoob Joseph <anoobj@marvell.com>
* [dpdk-dev] [PATCH v7 4/8] ipsec: add support for NAT-T
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (2 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 3/8] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 5/8] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (4 subsequent siblings)
8 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_outb.c | 9 +++++++++
lib/ipsec/rte_ipsec_sa.h | 9 ++++++++-
lib/ipsec/sa.c | 28 +++++++++++++++++++++++++---
3 files changed, 42 insertions(+), 4 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..0e3314b358 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -185,6 +186,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
+ /* if UDP encap is enabled update the dgram_len */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ (ph - sizeof(struct rte_udp_hdr));
+ udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
+ sa->hdr_l3_off - sa->hdr_len);
+ }
+
/* update original and new ip header fields */
update_tun_outb_l3hdr(sa, ph + sa->hdr_l3_off, ph + hlen,
mb->pkt_len - sqh_len, sa->hdr_l3_off, sqn_low16(sqc));
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..3a22705055 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -78,6 +78,7 @@ struct rte_ipsec_sa_prm {
* - for TUNNEL outer IP version (IPv4/IPv6)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* ...
*/
@@ -89,7 +90,8 @@ enum {
RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
- RTE_SATP_LOG2_DSCP
+ RTE_SATP_LOG2_DSCP,
+ RTE_SATP_LOG2_NATT
};
#define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG2_IPV)
@@ -125,6 +127,11 @@ enum {
#define RTE_IPSEC_SATP_DSCP_DISABLE (0ULL << RTE_SATP_LOG2_DSCP)
#define RTE_IPSEC_SATP_DSCP_ENABLE (1ULL << RTE_SATP_LOG2_DSCP)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
+
/**
* get type of given SA
* @return
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..1dd19467a6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -217,6 +218,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -355,12 +360,22 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+ memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
+
+ /* insert UDP header if UDP encapsulation is enabled */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ &sa->hdr[prm->tun.hdr_len];
+ sa->hdr_len += sizeof(struct rte_udp_hdr);
+ udph->src_port = prm->ipsec_xform.udp.sport;
+ udph->dst_port = prm->ipsec_xform.udp.dport;
+ udph->dgram_cksum = 0;
+ }
+
/* update l2_len and l3_len fields for outbound mbuf */
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
-
esp_outb_init(sa, sa->hdr_len);
}
@@ -372,7 +387,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +491,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
--
2.25.1
* [dpdk-dev] [PATCH v7 5/8] mbuf: add IPsec ESP tunnel type
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (3 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 4/8] ipsec: add support for NAT-T Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 6/8] ipsec: add transmit segmentation offload support Radu Nicolau
` (3 subsequent siblings)
8 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add ESP tunnel type to the tunnel types list that can be specified
for TSO or checksum on the inner part of tunnel packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..a4d95deee6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -253,6 +253,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
* [dpdk-dev] [PATCH v7 6/8] ipsec: add transmit segmentation offload support
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (4 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 5/8] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 7/8] ipsec: add support for SA telemetry Radu Nicolau
` (2 subsequent siblings)
8 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_outb.c | 119 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 100 insertions(+), 19 deletions(-)
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 0e3314b358..df7d3e8645 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -147,6 +147,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
struct rte_esp_tail *espt;
char *ph, *pt;
uint64_t *iv;
+ uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
/* calculate extra header space required */
hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
@@ -157,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append the ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -346,6 +356,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -358,11 +369,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append the ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -660,6 +679,29 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) {
+ segments = pkt_l3len / m->tso_segsz;
+ if (segments * m->tso_segsz < pkt_l3len)
+ segments++;
+ if (m->packet_type & RTE_PTYPE_L4_TCP)
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ else
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -669,24 +711,36 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_sqn += nb_segs[i];
+ /* setup offload fields for TSO */
+ if (nb_segs[i] > 1) {
+ mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
+ PKT_TX_OUTER_IP_CKSUM |
+ PKT_TX_TUNNEL_ESP);
+ mb[i]->outer_l3_len = mb[i]->l3_len;
+ }
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -700,11 +754,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, increment sqn by the number of
+ * segments required for the packet.
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -719,23 +780,36 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_sqn += nb_segs[i];
+ /* setup offload fields for TSO */
+ if (nb_segs[i] > 1) {
+ mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
+ PKT_TX_OUTER_IP_CKSUM);
+ mb[i]->outer_l3_len = mb[i]->l3_len;
+ }
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -750,11 +824,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /*
+ * If the packet uses TSO, increment sqn by the number of
+ * segments required for the packet.
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
--
2.25.1
* [dpdk-dev] [PATCH v7 7/8] ipsec: add support for SA telemetry
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (5 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 6/8] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 8/8] ipsec: add support for initial SQN value Radu Nicolau
2021-10-08 18:26 ` [dpdk-dev] [EXT] [PATCH v7 0/8] new features for ipsec and security libraries Akhil Goyal
8 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for IPsec SAs.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/esp_inb.c | 18 ++-
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/ipsec_telemetry.c | 237 ++++++++++++++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 ++++
lib/ipsec/sa.c | 10 +-
lib/ipsec/sa.h | 9 ++
lib/ipsec/version.map | 9 ++
8 files changed, 313 insertions(+), 11 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..6fbe468a61 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -15,7 +15,7 @@
#include "misc.h"
#include "pad.h"
-typedef uint16_t (*esp_inb_process_t)(const struct rte_ipsec_sa *sa,
+typedef uint16_t (*esp_inb_process_t)(struct rte_ipsec_sa *sa,
struct rte_mbuf *mb[], uint32_t sqn[], uint32_t dr[], uint16_t num,
uint8_t sqh_len);
@@ -573,10 +573,10 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
* *process* function for tunnel packets
*/
static inline uint16_t
-tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+tun_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
- uint32_t adj, i, k, tl;
+ uint32_t adj, i, k, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -598,6 +598,7 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
adj = hl[i] + cofs;
@@ -621,10 +622,13 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
tun_process_step3(mb[i], sa->tx_offload.msk,
sa->tx_offload.val);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
@@ -632,11 +636,11 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
* *process* function for tunnel packets
*/
static inline uint16_t
-trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+trs_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
char *np;
- uint32_t i, k, l2, tl;
+ uint32_t i, k, l2, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -656,6 +660,7 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
tl = tlen + espt[i].pad_len;
@@ -674,10 +679,13 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* update mbuf's metadata */
trs_process_step3(mb[i]);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index df7d3e8645..b18057b7da 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -625,7 +625,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -634,6 +634,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
@@ -644,10 +645,13 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
icv = rte_pktmbuf_mtod_offset(ml, void *,
ml->data_len - icv_len);
remove_sqh(icv, icv_len);
+ bytes += mb[i]->pkt_len;
k++;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
/* handle unprocessed mbufs */
if (k != num) {
@@ -667,16 +671,20 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+ bytes = 0;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->pkt_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes;
}
/* check if packet will exceed MSS and segmentation is required */
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
new file mode 100644
index 0000000000..f963d062a8
--- /dev/null
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include <rte_telemetry.h>
+#include <rte_malloc.h>
+#include "sa.h"
+
+
+struct ipsec_telemetry_entry {
+ LIST_ENTRY(ipsec_telemetry_entry) next;
+ struct rte_ipsec_sa *sa;
+};
+static LIST_HEAD(ipsec_telemetry_head, ipsec_telemetry_entry)
+ ipsec_telemetry_list = LIST_HEAD_INITIALIZER();
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ struct rte_ipsec_sa *sa = entry->sa;
+ rte_tel_data_add_array_u64(data, rte_be_to_cpu_32(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi = 0;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 10));
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ char sa_name[64];
+ sa = entry->sa;
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (sa_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes -
+ (sa->statistics.count * sa->hdr_len));
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i",
+ rte_be_to_cpu_32(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 10));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ uint64_t mode;
+ sa = entry->sa;
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+ }
+
+ return 0;
+}
+
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry = rte_zmalloc(NULL,
+ sizeof(struct ipsec_telemetry_entry), 0);
+ entry->sa = sa;
+ LIST_INSERT_HEAD(&ipsec_telemetry_list, entry, next);
+ return 0;
+}
+
+void
+rte_ipsec_telemetry_sa_del(struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry;
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ if (sa == entry->sa) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ return;
+ }
+ }
+}
+
+
+RTE_INIT(rte_ipsec_telemetry_init)
+{
+ rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec SAs with telemetry enabled.");
+ rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec SA statistics. Parameters: int sa_spi");
+ rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_details,
+ "Returns IPsec SA configuration. Parameters: int sa_spi");
+}
+
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..ddb9ea1767 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -1,9 +1,11 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
+sources = files('esp_inb.c', 'esp_outb.c',
+ 'sa.c', 'ses.c', 'ipsec_sad.c',
+ 'ipsec_telemetry.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..85f3ac0fff 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * Note that this function is not thread safe
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
+/**
+ * Disable per SA telemetry for a specific SA.
+ * Note that this function is not thread safe
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry disabled.
+ */
+__rte_experimental
+void
+rte_ipsec_telemetry_sa_del(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 1dd19467a6..44dcc524ee 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -649,19 +649,25 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->pkt_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes;
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..6e59f18e16 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -132,6 +132,15 @@ struct rte_ipsec_sa {
struct replay_sqn *rsn[REPLAY_SQN_NUM];
} inb;
} sqn;
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
} __rte_cache_aligned;
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..0af27ffd60 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_sa_add;
+ rte_ipsec_telemetry_sa_del;
+
+};
--
2.25.1
* [dpdk-dev] [PATCH v7 8/8] ipsec: add support for initial SQN value
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (6 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 7/8] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-10-01 9:50 ` Radu Nicolau
2021-10-08 18:26 ` [dpdk-dev] [EXT] [PATCH v7 0/8] new features for ipsec and security libraries Akhil Goyal
8 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-01 9:50 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/ipsec/sa.c | 25 ++++++++++++++++++-------
1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 44dcc524ee..85e06069de 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn > 1 ? sqn : 1;
algo_type = sa->algo_type;
@@ -376,7 +376,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
}
/*
@@ -502,7 +502,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, prm->ipsec_xform.esn.value);
break;
}
@@ -513,15 +513,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
+ uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -591,13 +595,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v7 0/8] new features for ipsec and security libraries
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
` (7 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 8/8] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-10-08 18:26 ` Akhil Goyal
2021-10-08 20:33 ` Akhil Goyal
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-10-08 18:26 UTC (permalink / raw)
To: Radu Nicolau
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> Add support for:
> TSO, NAT-T/UDP encapsulation, ESN
> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
> SA telemetry
> mbuf offload flags
> Initial SQN value
>
> Radu Nicolau (8):
> security: add ESN field to ipsec_xform
> ipsec: add support for AEAD algorithms
> security: add UDP params for IPsec NAT-T
> ipsec: add support for NAT-T
> mbuf: add IPsec ESP tunnel type
> ipsec: add transmit segmentation offload support
> ipsec: add support for SA telemetry
> ipsec: add support for initial SQN value
>
> lib/ipsec/crypto.h | 137 +++++++++++++++++++++
> lib/ipsec/esp_inb.c | 84 +++++++++++--
> lib/ipsec/esp_outb.c | 210 ++++++++++++++++++++++++++++----
> lib/ipsec/ipsec_telemetry.c | 237 ++++++++++++++++++++++++++++++++++++
> lib/ipsec/meson.build | 6 +-
> lib/ipsec/rte_ipsec.h | 23 ++++
> lib/ipsec/rte_ipsec_sa.h | 9 +-
> lib/ipsec/sa.c | 117 +++++++++++++++---
> lib/ipsec/sa.h | 15 +++
> lib/ipsec/version.map | 9 ++
> lib/mbuf/rte_mbuf_core.h | 1 +
> lib/security/rte_security.h | 15 +++
> 12 files changed, 811 insertions(+), 52 deletions(-)
> create mode 100644 lib/ipsec/ipsec_telemetry.c
>
> --
> v2: fixed lib/ipsec/version.map updates to show correct version
> v3: fixed build error and corrected misspelled email address
> v4: add doxygen comments for the IPsec telemetry APIs
> update inline comments referring to the wrong RFC
> v5: update commit messages after feedback
> update the UDP encapsulation patch to actually use the configured ports
> v6: fix initial SQN value
> v7: reworked the patches after feedback
>
Release notes are missing. At least some of the features deserve an update in the release notes.
For the ipsec lib, add a main bullet and then sub-bullets for the subsequent features.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v7 0/8] new features for ipsec and security libraries
2021-10-08 18:26 ` [dpdk-dev] [EXT] [PATCH v7 0/8] new features for ipsec and security libraries Akhil Goyal
@ 2021-10-08 20:33 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-10-08 20:33 UTC (permalink / raw)
To: Radu Nicolau
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> Subject: RE: [EXT] [PATCH v7 0/8] new features for ipsec and security libraries
>
> > Add support for:
> > TSO, NAT-T/UDP encapsulation, ESN
> > AES_CCM, CHACHA20_POLY1305 and AES_GMAC
> > SA telemetry
> > mbuf offload flags
> > Initial SQN value
> >
> > Radu Nicolau (8):
> > security: add ESN field to ipsec_xform
> > ipsec: add support for AEAD algorithms
> > security: add UDP params for IPsec NAT-T
> > ipsec: add support for NAT-T
> > mbuf: add IPsec ESP tunnel type
> > ipsec: add transmit segmentation offload support
> > ipsec: add support for SA telemetry
> > ipsec: add support for initial SQN value
> >
> > lib/ipsec/crypto.h | 137 +++++++++++++++++++++
> > lib/ipsec/esp_inb.c | 84 +++++++++++--
> > lib/ipsec/esp_outb.c | 210 ++++++++++++++++++++++++++++----
> > lib/ipsec/ipsec_telemetry.c | 237
> ++++++++++++++++++++++++++++++++++++
> > lib/ipsec/meson.build | 6 +-
> > lib/ipsec/rte_ipsec.h | 23 ++++
> > lib/ipsec/rte_ipsec_sa.h | 9 +-
> > lib/ipsec/sa.c | 117 +++++++++++++++---
> > lib/ipsec/sa.h | 15 +++
> > lib/ipsec/version.map | 9 ++
> > lib/mbuf/rte_mbuf_core.h | 1 +
> > lib/security/rte_security.h | 15 +++
> > 12 files changed, 811 insertions(+), 52 deletions(-)
> > create mode 100644 lib/ipsec/ipsec_telemetry.c
> >
> > --
> > v2: fixed lib/ipsec/version.map updates to show correct version
> > v3: fixed build error and corrected misspelled email address
> > v4: add doxygen comments for the IPsec telemetry APIs
> > update inline comments referring to the wrong RFC
> > v5: update commit messages after feedback
> > update the UDP encapsulation patch to actually use the configured ports
> > v6: fix initial SQN value
> > v7: reworked the patches after feedback
> >
> Release notes missing. At least some of the features deserve update in
> release notes.
> For ipsec lib add a main bullet and then add sub-bullets for subsequent
> features.
Also remove deprecation notices in the patch which added support for that.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (15 preceding siblings ...)
2021-10-01 9:50 ` [dpdk-dev] [PATCH v7 0/8] " Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
` (9 more replies)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
18 siblings, 10 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add ESN field to ipsec_xform
ipsec: add support for AEAD algorithms
security: add UDP params for IPsec NAT-T
ipsec: add support for NAT-T
mbuf: add IPsec ESP tunnel type
ipsec: add transmit segmentation offload support
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
doc: remove unneeded ipsec new field deprecation
doc: remove unneeded security deprecation
doc/guides/prog_guide/ipsec_lib.rst | 14 +-
doc/guides/rel_notes/deprecation.rst | 9 +-
doc/guides/rel_notes/release_21_11.rst | 17 ++
lib/ipsec/crypto.h | 137 ++++++++++++++
lib/ipsec/esp_inb.c | 84 ++++++++-
lib/ipsec/esp_outb.c | 210 +++++++++++++++++++---
lib/ipsec/ipsec_telemetry.c | 237 +++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 +++
lib/ipsec/rte_ipsec_sa.h | 9 +-
lib/ipsec/sa.c | 117 ++++++++++--
lib/ipsec/sa.h | 15 ++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 15 ++
15 files changed, 842 insertions(+), 61 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
v5: update commit messages after feedback
update the UDP encapsulation patch to actually use the configured ports
v6: fix initial SQN value
v7: reworked the patches after feedback
v8: updated library doc, release notes and removed deprecation notices
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-12 10:23 ` Ananyev, Konstantin
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 02/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (8 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
doc/guides/rel_notes/deprecation.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 4 ++++
lib/security/rte_security.h | 8 ++++++++
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index baf15aa722..8b7b0beee2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -212,7 +212,7 @@ Deprecation Notices
* security: The structure ``rte_security_ipsec_xform`` will be extended with
multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
+ IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like IPsec inner
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index c0a7f75518..401c6d453a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,10 @@ ABI Changes
``rte_security_ipsec_xform`` to allow applications to configure SA soft
and hard expiry limits. Limits can be either in number of packets or bytes.
+* security: A new structure ``esn`` was added in structure
+ ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
+ the application to start from an arbitrary ESN value for debug and SA lifetime
+ enforcement purposes.
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 2013e65e49..371d64647a 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -280,6 +280,14 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-12 10:23 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 10:23 UTC (permalink / raw)
To: Nicolau, Radu, Ray Kinsella, Akhil Goyal, Doherty, Declan
Cc: dev, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
>
> Update ipsec_xform definition to include ESN field.
> This allows the application to control the ESN starting value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 2 +-
> doc/guides/rel_notes/release_21_11.rst | 4 ++++
> lib/security/rte_security.h | 8 ++++++++
> 3 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index baf15aa722..8b7b0beee2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -212,7 +212,7 @@ Deprecation Notices
>
> * security: The structure ``rte_security_ipsec_xform`` will be extended with
> multiple fields: source and destination port of UDP encapsulation,
> - IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
> + IPsec payload MSS (Maximum Segment Size).
>
> * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> will be updated with new fields to support new features like IPsec inner
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index c0a7f75518..401c6d453a 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -229,6 +229,10 @@ ABI Changes
> ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> and hard expiry limits. Limits can be either in number of packets or bytes.
>
> +* security: A new structure ``esn`` was added in structure
> + ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
> + the application to start from an arbitrary ESN value for debug and SA lifetime
> + enforcement purposes.
>
> Known Issues
> ------------
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 2013e65e49..371d64647a 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -280,6 +280,14 @@ struct rte_security_ipsec_xform {
> /**< Anti replay window size to enable sequence replay attack handling.
> * replay checking is disabled if the window size is 0.
> */
> + union {
> + uint64_t value;
> + struct {
> + uint32_t low;
> + uint32_t hi;
> + };
> + } esn;
> + /**< Extended Sequence Number */
> };
>
> /**
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 02/10] ipsec: add support for AEAD algorithms
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (7 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 3 +-
doc/guides/rel_notes/release_21_11.rst | 4 +
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 +++++++++++-
lib/ipsec/esp_outb.c | 70 ++++++++++++-
lib/ipsec/sa.c | 54 +++++++++-
lib/ipsec/sa.h | 6 ++
7 files changed, 328 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 9f2b26072d..93e213bf36 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -313,7 +313,8 @@ Supported features
* ESN and replay window.
-* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, HMAC-SHA1, NULL.
+* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
+ AES_GMAC, HMAC-SHA1, NULL.
Limitations
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 401c6d453a..8ac6632abf 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -134,6 +134,10 @@ New Features
* Added tests to validate packets hard expiry.
* Added tests to verify tunnel header verification in IPsec inbound.
+* **IPsec library new features.**
+
+ * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
+
Removed Items
-------------
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally this would be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 02/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-12 10:24 ` Ananyev, Konstantin
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 04/10] ipsec: add support for NAT-T Radu Nicolau
` (6 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Add support for specifying UDP port params for the UDP encapsulation option.
RFC 3948 section 2.1 does not mandate the use of specific UDP ports for
the UDP-Encapsulated ESP header.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
doc/guides/rel_notes/deprecation.rst | 5 ++---
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 7 +++++++
3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8b7b0beee2..d24d69b669 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -210,9 +210,8 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with
- multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size).
+* security: The structure ``rte_security_ipsec_xform`` will be extended with
+ a new field: IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like IPsec inner
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8ac6632abf..1a29640eea 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -238,6 +238,11 @@ ABI Changes
application to start from an arbitrary ESN value for debug and SA lifetime
enforcement purposes.
+* security: A new structure ``udp`` was added in structure
+ ``rte_security_ipsec_xform`` to allow setting the source and destination ports
+ for UDP encapsulated IPsec traffic.
+
+
Known Issues
------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 371d64647a..b30425e206 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -288,6 +293,8 @@ struct rte_security_ipsec_xform {
};
} esn;
/**< Extended Sequence Number */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-12 10:24 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 10:24 UTC (permalink / raw)
To: Nicolau, Radu, Ray Kinsella, Akhil Goyal, Doherty, Declan
Cc: dev, Medvedkin, Vladimir, Richardson, Bruce, Zhang, Roy Fan,
hemant.agrawal, anoobj, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
> Add support for specifying UDP port params for the UDP encapsulation option.
> RFC 3948 section 2.1 does not mandate the use of specific UDP ports for
> the UDP-Encapsulated ESP header.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Acked-by: Anoob Joseph <anoobj@marvell.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 5 ++---
> doc/guides/rel_notes/release_21_11.rst | 5 +++++
> lib/security/rte_security.h | 7 +++++++
> 3 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 8b7b0beee2..d24d69b669 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -210,9 +210,8 @@ Deprecation Notices
> pointer for the private data to the application which can be attached
> to the packet while enqueuing.
>
> -* security: The structure ``rte_security_ipsec_xform`` will be extended with
> - multiple fields: source and destination port of UDP encapsulation,
> - IPsec payload MSS (Maximum Segment Size).
> -* security: The structure ``rte_security_ipsec_xform`` will be extended with
> + a new field: IPsec payload MSS (Maximum Segment Size).
>
> * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> will be updated with new fields to support new features like IPsec inner
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 8ac6632abf..1a29640eea 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -238,6 +238,11 @@ ABI Changes
> application to start from an arbitrary ESN value for debug and SA lifetime
> enforcement purposes.
>
> +* security: A new structure ``udp`` was added in structure
> + ``rte_security_ipsec_xform`` to allow setting the source and destination ports
> + for UDP encapsulated IPsec traffic.
> +
> +
> Known Issues
> ------------
>
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 371d64647a..b30425e206 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
> };
> };
>
> +struct rte_security_ipsec_udp_param {
> + uint16_t sport;
> + uint16_t dport;
> +};
> +
> /**
> * IPsec Security Association option flags
> */
> @@ -288,6 +293,8 @@ struct rte_security_ipsec_xform {
> };
> } esn;
> /**< Extended Sequence Number */
> + struct rte_security_ipsec_udp_param udp;
> + /**< UDP parameters, ignored when udp_encap option not specified */
> };
>
> /**
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 04/10] ipsec: add support for NAT-T
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (2 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-12 10:50 ` Ananyev, Konstantin
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 05/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (5 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 2 ++
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_outb.c | 9 +++++++++
lib/ipsec/rte_ipsec_sa.h | 9 ++++++++-
lib/ipsec/sa.c | 28 +++++++++++++++++++++++---
5 files changed, 45 insertions(+), 4 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 93e213bf36..af51ff8131 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -313,6 +313,8 @@ Supported features
* ESN and replay window.
+* NAT-T / UDP encapsulated ESP.
+
* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
AES_GMAC, HMAC-SHA1, NULL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1a29640eea..73a566eaca 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -137,6 +137,7 @@ New Features
* **IPsec library new features.**
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
+ * Added support for NAT-T / UDP encapsulated ESP
Removed Items
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..0e3314b358 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -185,6 +186,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
+ /* if UDP encap is enabled update the dgram_len */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ (ph - sizeof(struct rte_udp_hdr));
+ udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
+ sa->hdr_l3_off - sa->hdr_len);
+ }
+
/* update original and new ip header fields */
update_tun_outb_l3hdr(sa, ph + sa->hdr_l3_off, ph + hlen,
mb->pkt_len - sqh_len, sa->hdr_l3_off, sqn_low16(sqc));
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..3a22705055 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -78,6 +78,7 @@ struct rte_ipsec_sa_prm {
* - for TUNNEL outer IP version (IPv4/IPv6)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* ...
*/
@@ -89,7 +90,8 @@ enum {
RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
- RTE_SATP_LOG2_DSCP
+ RTE_SATP_LOG2_DSCP,
+ RTE_SATP_LOG2_NATT
};
#define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG2_IPV)
@@ -125,6 +127,11 @@ enum {
#define RTE_IPSEC_SATP_DSCP_DISABLE (0ULL << RTE_SATP_LOG2_DSCP)
#define RTE_IPSEC_SATP_DSCP_ENABLE (1ULL << RTE_SATP_LOG2_DSCP)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
+
/**
* get type of given SA
* @return
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..1dd19467a6 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -217,6 +218,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -355,12 +360,22 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+ memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
+
+ /* insert UDP header if UDP encapsulation is enabled */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ &sa->hdr[prm->tun.hdr_len];
+ sa->hdr_len += sizeof(struct rte_udp_hdr);
+ udph->src_port = prm->ipsec_xform.udp.sport;
+ udph->dst_port = prm->ipsec_xform.udp.dport;
+ udph->dgram_cksum = 0;
+ }
+
/* update l2_len and l3_len fields for outbound mbuf */
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
-
esp_outb_init(sa, sa->hdr_len);
}
@@ -372,7 +387,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +491,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v8 04/10] ipsec: add support for NAT-T
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 04/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-10-12 10:50 ` Ananyev, Konstantin
2021-10-12 11:05 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 10:50 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> packets.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/prog_guide/ipsec_lib.rst | 2 ++
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/ipsec/esp_outb.c | 9 +++++++++
> lib/ipsec/rte_ipsec_sa.h | 9 ++++++++-
> lib/ipsec/sa.c | 28 +++++++++++++++++++++++---
> 5 files changed, 45 insertions(+), 4 deletions(-)
>
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> index 93e213bf36..af51ff8131 100644
> --- a/doc/guides/prog_guide/ipsec_lib.rst
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -313,6 +313,8 @@ Supported features
>
> * ESN and replay window.
>
> +* NAT-T / UDP encapsulated ESP.
> +
> * algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
> AES_GMAC, HMAC-SHA1, NULL.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 1a29640eea..73a566eaca 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -137,6 +137,7 @@ New Features
> * **IPsec library new features.**
>
> * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
> + * Added support for NAT-T / UDP encapsulated ESP
>
>
> Removed Items
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index a3f77469c3..0e3314b358 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -5,6 +5,7 @@
> #include <rte_ipsec.h>
> #include <rte_esp.h>
> #include <rte_ip.h>
> +#include <rte_udp.h>
> #include <rte_errno.h>
> #include <rte_cryptodev.h>
>
> @@ -185,6 +186,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> /* copy tunnel pkt header */
> rte_memcpy(ph, sa->hdr, sa->hdr_len);
>
> + /* if UDP encap is enabled update the dgram_len */
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
> + (ph - sizeof(struct rte_udp_hdr));
> + udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
> + sa->hdr_l3_off - sa->hdr_len);
> + }
> +
> /* update original and new ip header fields */
> update_tun_outb_l3hdr(sa, ph + sa->hdr_l3_off, ph + hlen,
> mb->pkt_len - sqh_len, sa->hdr_l3_off, sqn_low16(sqc));
> diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
> index cf51ad8338..3a22705055 100644
> --- a/lib/ipsec/rte_ipsec_sa.h
> +++ b/lib/ipsec/rte_ipsec_sa.h
> @@ -78,6 +78,7 @@ struct rte_ipsec_sa_prm {
> * - for TUNNEL outer IP version (IPv4/IPv6)
> * - are SA SQN operations 'atomic'
> * - ESN enabled/disabled
> + * - NAT-T UDP encapsulated (TUNNEL mode only)
> * ...
> */
>
> @@ -89,7 +90,8 @@ enum {
> RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
> RTE_SATP_LOG2_ESN,
> RTE_SATP_LOG2_ECN,
> - RTE_SATP_LOG2_DSCP
> + RTE_SATP_LOG2_DSCP,
> + RTE_SATP_LOG2_NATT
> };
>
> #define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG2_IPV)
> @@ -125,6 +127,11 @@ enum {
> #define RTE_IPSEC_SATP_DSCP_DISABLE (0ULL << RTE_SATP_LOG2_DSCP)
> #define RTE_IPSEC_SATP_DSCP_ENABLE (1ULL << RTE_SATP_LOG2_DSCP)
>
> +#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
> +
> +
> /**
> * get type of given SA
> * @return
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 720e0f365b..1dd19467a6 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -5,6 +5,7 @@
> #include <rte_ipsec.h>
> #include <rte_esp.h>
> #include <rte_ip.h>
> +#include <rte_udp.h>
> #include <rte_errno.h>
> #include <rte_cryptodev.h>
>
> @@ -217,6 +218,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
> } else
> return -EINVAL;
>
> + /* check for UDP encapsulation flag */
> + if (prm->ipsec_xform.options.udp_encap == 1)
> + tp |= RTE_IPSEC_SATP_NATT_ENABLE;
> +
> /* check for ESN flag */
> if (prm->ipsec_xform.options.esn == 0)
> tp |= RTE_IPSEC_SATP_ESN_DISABLE;
> @@ -355,12 +360,22 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> sa->hdr_len = prm->tun.hdr_len;
> sa->hdr_l3_off = prm->tun.hdr_l3_off;
>
> + memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
> +
> + /* insert UDP header if UDP encapsulation is enabled */
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
> + &sa->hdr[prm->tun.hdr_len];
I think we need a check somewhere here (probably in rte_ipsec_sa_init() or so)
to make sure that new sa->hdr_len wouldn't overrun sizeof(sa->hdr).
> + sa->hdr_len += sizeof(struct rte_udp_hdr);
> + udph->src_port = prm->ipsec_xform.udp.sport;
> + udph->dst_port = prm->ipsec_xform.udp.dport;
> + udph->dgram_cksum = 0;
> + }
> +
> /* update l2_len and l3_len fields for outbound mbuf */
> sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
> sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
So for such packets UDP cksum will always be zero, and we don't need to
setup l4_hdr or any TX L4 flags, correct?
>
> - memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
> -
> esp_outb_init(sa, sa->hdr_len);
> }
>
> @@ -372,7 +387,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> const struct crypto_xform *cxf)
> {
> static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> - RTE_IPSEC_SATP_MODE_MASK;
> + RTE_IPSEC_SATP_MODE_MASK |
> + RTE_IPSEC_SATP_NATT_MASK;
>
> if (prm->ipsec_xform.options.ecn)
> sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
> @@ -475,10 +491,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_inb_init(sa);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> esp_outb_tun_init(sa, prm);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_outb_init(sa, 0);
> break;
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v8 04/10] ipsec: add support for NAT-T
2021-10-12 10:50 ` Ananyev, Konstantin
@ 2021-10-12 11:05 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-10-12 11:05 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
On 10/12/2021 11:50 AM, Ananyev, Konstantin wrote:
>
>>
>>
>> + memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
>> +
>> + /* insert UDP header if UDP encapsulation is enabled */
>> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
>> + struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
>> + &sa->hdr[prm->tun.hdr_len];
> I think we need a check somewhere here (probably in rte_ipsec_sa_init() or so)
> to make sure that new sa->hdr_len wouldn't overrun sizeof(sa->hdr).
Yes, I will add a check.
>
>
>> + sa->hdr_len += sizeof(struct rte_udp_hdr);
>> + udph->src_port = prm->ipsec_xform.udp.sport;
>> + udph->dst_port = prm->ipsec_xform.udp.dport;
>> + udph->dgram_cksum = 0;
>> + }
>> +
>> /* update l2_len and l3_len fields for outbound mbuf */
>> sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
>> sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
>
> So for such packets UDP cksum will always be zero, and we don't need to
> setup l4_hdr or any TX L4 flags, correct?
The UDP checksum should be 0 and must not be checked; this is indeed what the
RFC requires. So from what I can see we don't need to set up the L4 flags.
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 05/10] mbuf: add IPsec ESP tunnel type
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (3 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 04/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (4 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add ESP tunnel type to the tunnel types list that can be specified
for TSO or checksum on the inner part of tunnel packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 9d8e3ddc86..4747c0c452 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -255,6 +255,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (4 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 05/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-12 12:42 ` Ananyev, Konstantin
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 07/10] ipsec: add support for SA telemetry Radu Nicolau
` (3 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 2 +
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_outb.c | 119 +++++++++++++++++++++----
3 files changed, 103 insertions(+), 19 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index af51ff8131..fc0af5eadb 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -315,6 +315,8 @@ Supported features
* NAT-T / UDP encapsulated ESP.
+* TSO support (only for inline crypto mode)
+
* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
AES_GMAC, HMAC-SHA1, NULL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73a566eaca..77535ace36 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -138,6 +138,7 @@ New Features
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
* Added support for NAT-T / UDP encapsulated ESP
+ * Added support for TSO offload; only supported for inline crypto mode.
Removed Items
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 0e3314b358..df7d3e8645 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -147,6 +147,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
struct rte_esp_tail *espt;
char *ph, *pt;
uint64_t *iv;
+ uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
/* calculate extra header space required */
hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
@@ -157,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -346,6 +356,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
char *ph, *pt;
uint64_t *iv;
uint32_t l2len, l3len;
+ uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
l2len = mb->l2_len;
l3len = mb->l3_len;
@@ -358,11 +369,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align the packet when using TSO offload */
+ if (likely(!tso))
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (likely(!tso))
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -660,6 +679,29 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+/* check if packet will exceed MSS and segmentation is required */
+static inline int
+esn_outb_nb_segments(struct rte_mbuf *m) {
+ uint16_t segments = 1;
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+
+ /* Only support segmentation for UDP/TCP flows */
+ if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
+ return segments;
+
+ if (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) {
+ segments = pkt_l3len / m->tso_segsz;
+ if (segments * m->tso_segsz < pkt_l3len)
+ segments++;
+ if (m->packet_type & RTE_PTYPE_L4_TCP)
+ m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
+ else
+ m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
+ }
+
+ return segments;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -669,24 +711,36 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_sqn += nb_segs[i];
+ /* setup offload fields for TSO */
+ if (nb_segs[i] > 1) {
+ mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
+ PKT_TX_OUTER_IP_CKSUM |
+ PKT_TX_TUNNEL_ESP);
+ mb[i]->outer_l3_len = mb[i]->l3_len;
+ }
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
-
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -700,11 +754,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -719,23 +780,36 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_sqn, nb_sqn_alloc;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ /* Calculate number of sequence numbers required */
+ for (i = 0, nb_sqn = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_sqn += nb_segs[i];
+ /* setup offload fields for TSO */
+ if (nb_segs[i] > 1) {
+ mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
+ PKT_TX_OUTER_IP_CKSUM);
+ mb[i]->outer_l3_len = mb[i]->l3_len;
+ }
+ }
+
+ nb_sqn_alloc = nb_sqn;
+ sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
+ if (nb_sqn_alloc != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
sqc = rte_cpu_to_be_64(sqn + i);
gen_iv(iv, sqc);
@@ -750,11 +824,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
dr[i - k] = i;
rte_errno = -rc;
}
+
+ /**
+ * If packet is using tso, increment sqn by the number of
+ * segments for packet
+ */
+ if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
+ sqn += nb_segs[i] - 1;
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-10-12 12:42 ` Ananyev, Konstantin
2021-10-12 16:25 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 12:42 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> Add support for transmit segmentation offload to inline crypto processing
> mode. This offload is not supported by other offload modes, as at a
> minimum it requires inline crypto for IPsec to be supported on the
> network interface.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/prog_guide/ipsec_lib.rst | 2 +
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/ipsec/esp_outb.c | 119 +++++++++++++++++++++----
> 3 files changed, 103 insertions(+), 19 deletions(-)
>
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> index af51ff8131..fc0af5eadb 100644
> --- a/doc/guides/prog_guide/ipsec_lib.rst
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -315,6 +315,8 @@ Supported features
>
> * NAT-T / UDP encapsulated ESP.
>
> +* TSO support (only for inline crypto mode)
> +
> * algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
> AES_GMAC, HMAC-SHA1, NULL.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 73a566eaca..77535ace36 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -138,6 +138,7 @@ New Features
>
> * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
> * Added support for NAT-T / UDP encapsulated ESP
> + * Added TSO offload support; only supported for inline crypto mode.
>
>
> Removed Items
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 0e3314b358..df7d3e8645 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -147,6 +147,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> struct rte_esp_tail *espt;
> char *ph, *pt;
> uint64_t *iv;
> + uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
Why not simply
int tso = (mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) != 0;
?
As this functionality is supported only for inline mode, it is better to pass tso as a parameter,
instead of checking it here.
Then for lookaside and cpu invocations it will always be zero; for inline it would be determined
by packet flags.
>
> /* calculate extra header space required */
> hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
> @@ -157,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* number of bytes to encrypt */
> clen = plen + sizeof(*espt);
> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> + /* We don't need to pad/align packet when using TSO offload */
> + if (likely(!tso))
I don't think we really do likely/unlikely here.
Especially if it will be a constant for non-inline cases.
> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
>
> /* pad length + esp tail */
> pdlen = clen - plen;
> - tlen = pdlen + sa->icv_len + sqh_len;
> +
> + /* We don't append ICV length when using TSO offload */
> + if (likely(!tso))
> + tlen = pdlen + sa->icv_len + sqh_len;
> + else
> + tlen = pdlen + sqh_len;
>
> /* do append and prepend */
> ml = rte_pktmbuf_lastseg(mb);
> @@ -346,6 +356,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> char *ph, *pt;
> uint64_t *iv;
> uint32_t l2len, l3len;
> + uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
Same thoughts as for _tun_ counterpart.
>
> l2len = mb->l2_len;
> l3len = mb->l3_len;
> @@ -358,11 +369,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* number of bytes to encrypt */
> clen = plen + sizeof(*espt);
> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> + /* We don't need to pad/align packet when using TSO offload */
> + if (likely(!tso))
> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>
> /* pad length + esp tail */
> pdlen = clen - plen;
> - tlen = pdlen + sa->icv_len + sqh_len;
> +
> + /* We don't append ICV length when using TSO offload */
> + if (likely(!tso))
> + tlen = pdlen + sa->icv_len + sqh_len;
> + else
> + tlen = pdlen + sqh_len;
>
> /* do append and insert */
> ml = rte_pktmbuf_lastseg(mb);
> @@ -660,6 +679,29 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> }
> }
>
> +/* check if packet will exceed MSS and segmentation is required */
> +static inline int
> +esn_outb_nb_segments(struct rte_mbuf *m) {
DPDK coding style pls.
> + uint16_t segments = 1;
> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> +
> + /* Only support segmentation for UDP/TCP flows */
> + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
For ptypes it is not a bit flag, it should be something like:
pt = m->packet_type & RTE_PTYPE_L4_MASK;
if (pt == RTE_PTYPE_L4_UDP || pt == RTE_PTYPE_L4_TCP) {...}
BTW, ptype is usually used for RX path.
If you expect the user to set it up on the TX path - it has to be documented in formal API comments.
> + return segments;
> +
> + if (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) {
> + segments = pkt_l3len / m->tso_segsz;
> + if (segments * m->tso_segsz < pkt_l3len)
> + segments++;
Why not simply:
segments = (pkt_l3len <= m->tso_segsz) ? 1 : (pkt_l3len + m->tso_segsz - 1) / m->tso_segsz;
?
> + if (m->packet_type & RTE_PTYPE_L4_TCP)
> + m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
> + else
> + m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
> + }
> +
> + return segments;
> +}
> +
> /*
> * process group of ESP outbound tunnel packets destined for
> * INLINE_CRYPTO type of device.
> @@ -669,24 +711,36 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> int32_t rc;
> - uint32_t i, k, n;
> + uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
> uint64_t sqn;
> rte_be64_t sqc;
> struct rte_ipsec_sa *sa;
> union sym_op_data icv;
> uint64_t iv[IPSEC_MAX_IV_QWORD];
> uint32_t dr[num];
> + uint16_t nb_segs[num];
>
> sa = ss->sa;
>
> - n = num;
> - sqn = esn_outb_update_sqn(sa, &n);
> - if (n != num)
> + for (i = 0; i != num; i++) {
> + nb_segs[i] = esn_outb_nb_segments(mb[i]);
> + nb_sqn += nb_segs[i];
> + /* setup offload fields for TSO */
> + if (nb_segs[i] > 1) {
> + mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
> + PKT_TX_OUTER_IP_CKSUM |
Hmm..., why did you decide it would always be an ipv4 packet?
Why does it definitely need outer ip cksum?
> + PKT_TX_TUNNEL_ESP);
Another question: why do you set up some flags in esn_outb_nb_segments(),
and others here?
> + mb[i]->outer_l3_len = mb[i]->l3_len;
Not sure I understand that part:
l3_len will be provided to us by the user and will be the inner l3_len,
while we do add our own l3 hdr, which will become the outer l3, no?
> + }
> + }
> +
> + nb_sqn_alloc = nb_sqn;
> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> + if (nb_sqn_alloc != nb_sqn)
> rte_errno = EOVERFLOW;
>
> k = 0;
> - for (i = 0; i != n; i++) {
> -
> + for (i = 0; i != num; i++) {
You can't expect that nb_sqn_alloc == nb_sqn.
You need to handle EOVERFLOW here properly.
> sqc = rte_cpu_to_be_64(sqn + i);
> gen_iv(iv, sqc);
>
> @@ -700,11 +754,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> dr[i - k] = i;
> rte_errno = -rc;
> }
> +
> + /**
> + * If packet is using tso, increment sqn by the number of
> + * segments for packet
> + */
> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> + sqn += nb_segs[i] - 1;
I think instead of that you can just:
sqn += nb_segs[i];
and then above:
- sqc = rte_cpu_to_be_64(sqn + i);
+ sqc = rte_cpu_to_be_64(sqn);
> }
>
> /* copy not processed mbufs beyond good ones */
> - if (k != n && k != 0)
> - move_bad_mbufs(mb, dr, n, n - k);
> + if (k != num && k != 0)
> + move_bad_mbufs(mb, dr, num, num - k);
Same as above - you can't just assume there would be no failures
with SQN allocation.
> inline_outb_mbuf_prepare(ss, mb, k);
> return k;
Similar thoughts for _trs_ counterpart.
Honestly, considering the amount of changes introduced, I would like to see a new test-case for it.
Otherwise it is really hard to be sure that it does work as expected.
Can you add a new test-case for it to examples/ipsec-secgw/test?
> @@ -719,23 +780,36 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> int32_t rc;
> - uint32_t i, k, n;
> + uint32_t i, k, nb_sqn, nb_sqn_alloc;
> uint64_t sqn;
> rte_be64_t sqc;
> struct rte_ipsec_sa *sa;
> union sym_op_data icv;
> uint64_t iv[IPSEC_MAX_IV_QWORD];
> uint32_t dr[num];
> + uint16_t nb_segs[num];
>
> sa = ss->sa;
>
> - n = num;
> - sqn = esn_outb_update_sqn(sa, &n);
> - if (n != num)
> + /* Calculate number of sequence numbers required */
> + for (i = 0, nb_sqn = 0; i != num; i++) {
> + nb_segs[i] = esn_outb_nb_segments(mb[i]);
> + nb_sqn += nb_segs[i];
> + /* setup offload fields for TSO */
> + if (nb_segs[i] > 1) {
> + mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
> + PKT_TX_OUTER_IP_CKSUM);
> + mb[i]->outer_l3_len = mb[i]->l3_len;
> + }
> + }
> +
> + nb_sqn_alloc = nb_sqn;
> + sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> + if (nb_sqn_alloc != nb_sqn)
> rte_errno = EOVERFLOW;
>
> k = 0;
> - for (i = 0; i != n; i++) {
> + for (i = 0; i != num; i++) {
>
> sqc = rte_cpu_to_be_64(sqn + i);
> gen_iv(iv, sqc);
> @@ -750,11 +824,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> dr[i - k] = i;
> rte_errno = -rc;
> }
> +
> + /**
> + * If packet is using tso, increment sqn by the number of
> + * segments for packet
> + */
> + if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> + sqn += nb_segs[i] - 1;
> }
>
> /* copy not processed mbufs beyond good ones */
> - if (k != n && k != 0)
> - move_bad_mbufs(mb, dr, n, n - k);
> + if (k != num && k != 0)
> + move_bad_mbufs(mb, dr, num, num - k);
>
> inline_outb_mbuf_prepare(ss, mb, k);
> return k;
> --
> 2.25.1
* Re: [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support
2021-10-12 12:42 ` Ananyev, Konstantin
@ 2021-10-12 16:25 ` Ananyev, Konstantin
2021-10-13 12:15 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 16:25 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> > +/* check if packet will exceed MSS and segmentation is required */
> > +static inline int
> > +esn_outb_nb_segments(struct rte_mbuf *m) {
>
> DPDK coding style pls.
>
> > + uint16_t segments = 1;
> > + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> > +
> > + /* Only support segmentation for UDP/TCP flows */
> > + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
>
> For ptypes it is not a bit flag, it should be something like:
>
> pt = m->packet_type & RTE_PTYPE_L4_MASK;
> if (pt == RTE_PTYPE_L4_UDP || pt == RTE_PTYPE_L4_TCP) {...}
>
> BTW, ptype is usually used for RX path.
> If you expect the user to set it up on the TX path - it has to be documented in formal API comments.
Thinking a bit more about it:
Do we really need to force user to set ptypes to use this feature?
Might be something as simple as follows would work:
1. If user expects that he would need TSO for the ESP packet,
he would simply set PKT_TX_TCP_SEG flag and usual offload fields required
(l2_len, l3_len, l4_len, tso_segsz).
2. In ipsec lib we'll check for PKT_TX_TCP_SEG - and if it is set we'll do extra processing
(as your patch does - calc number of segments, fill ESP data in a bit different way,
fill outer_l2_len, outer_l3_len etc.)
3. If the user overestimates things and there would be just one segment within a packet with
PKT_TX_TCP_SEG - I don't think it is a big deal, things will keep working correctly and AFAIK
there would be no slowdown.
That way it should probably simplify things for this feature and would help
avoid setting extra ol_flags inside ipsec lib.
One side question - how PMD will report that this feature is supported?
Would it be extra field in rte_security_ipsec_xform or something different?
>
> > + return segments;
> > +
> > + if (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) {
> > + segments = pkt_l3len / m->tso_segsz;
> > + if (segments * m->tso_segsz < pkt_l3len)
> > + segments++;
>
> Why not simply:
> segments = (pkt_l3len <= m->tso_segsz) ? 1 : (pkt_l3len + m->tso_segsz - 1) / m->tso_segsz;
> ?
>
> > + if (m->packet_type & RTE_PTYPE_L4_TCP)
> > + m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
> > + else
> > + m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
> > + }
> > +
> > + return segments;
> > +}
> > +
* Re: [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support
2021-10-12 16:25 ` Ananyev, Konstantin
@ 2021-10-13 12:15 ` Nicolau, Radu
2021-10-14 14:44 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-10-13 12:15 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
On 10/12/2021 5:25 PM, Ananyev, Konstantin wrote:
>>> +/* check if packet will exceed MSS and segmentation is required */
>>> +static inline int
>>> +esn_outb_nb_segments(struct rte_mbuf *m) {
>> DPDK coding style pls.
>>
>>> + uint16_t segments = 1;
>>> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
>>> +
>>> + /* Only support segmentation for UDP/TCP flows */
>>> + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
>> For ptypes it is not a bit flag, it should be something like:
>>
>> pt = m->packet_type & RTE_PTYPE_L4_MASK;
>> if (pt == RTE_PTYPE_L4_UDP || pt == RTE_PTYPE_L4_TCP) {...}
>>
>> BTW, ptype is usually used for RX path.
>> If you expect the user to set it up on the TX path - it has to be documented in formal API comments.
> Thinking a bit more about it:
> Do we really need to force user to set ptypes to use this feature?
> Might be something as simple as follows would work:
>
> 1. If user expects that he would need TSO for the ESP packet,
> he would simply set PKT_TX_TCP_SEG flag and usual offload fields required
> (l2_len, l3_len, l4_len, tso_segsz).
> 2. In ipsec lib we'll check for PKT_TX_TCP_SEG - and if it is set we'll do extra processing
> (as your patch does - calc number of segments, fill ESP data in a bit different way,
> fill outer_l2_len, outer_l3_len etc.)
> 3. If the user overestimates things and there would be just one segment within a packet with
> PKT_TX_TCP_SEG - I don't think it is a big deal, things will keep working correctly and AFAIK
> there would be no slowdown.
>
> That way it should probably simplify things for this feature and would help
> avoid setting extra ol_flags inside ipsec lib.
Yes, this sounds good, I will rework it like so.
> One side question - how PMD will report that this feature is supported?
> Would it be extra field in rte_security_ipsec_xform or something different?
The assumption is that if a PMD supports inline crypto and TSO it will
support them together, as DEV_TX_OFFLOAD_SECURITY | DEV_TX_OFFLOAD_TCP_TSO
* Re: [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support
2021-10-13 12:15 ` Nicolau, Radu
@ 2021-10-14 14:44 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-14 14:44 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> On 10/12/2021 5:25 PM, Ananyev, Konstantin wrote:
> >>> +/* check if packet will exceed MSS and segmentation is required */
> >>> +static inline int
> >>> +esn_outb_nb_segments(struct rte_mbuf *m) {
> >> DPDK coding style pls.
> >>
> >>> + uint16_t segments = 1;
> >>> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> >>> +
> >>> + /* Only support segmentation for UDP/TCP flows */
> >>> + if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
> >> For ptypes it is not a bit flag, it should be something like:
> >>
> >> pt = m->packet_type & RTE_PTYPE_L4_MASK;
> >> if (pt == RTE_PTYPE_L4_UDP || pt == RTE_PTYPE_L4_TCP) {...}
> >>
> >> BTW, ptype is usually used for RX path.
> >> If you expect the user to set it up on the TX path - it has to be documented in formal API comments.
> > Thinking a bit more about it:
> > Do we really need to force user to set ptypes to use this feature?
> > Might be something as simple as follows would work:
> >
> > 1. If user expects that he would need TSO for the ESP packet,
> > he would simply set PKT_TX_TCP_SEG flag and usual offload fields required
> > (l2_len, l3_len, l4_len, tso_segsz).
> > 2. In ipsec lib we'll check for PKT_TX_TCP_SEG - and if it is set we'll do extra processing
> > (as your patch does - calc number of segments, fill ESP data in a bit different way,
> > fill outer_l2_len, outer_l3_len etc.)
> > 3. If the user overestimates things and there would be just one segment within a packet with
> > PKT_TX_TCP_SEG - I don't think it is a big deal, things will keep working correctly and AFAIK
> > there would be no slowdown.
> >
> > That way it should probably simplify things for this feature and would help
> > avoid setting extra ol_flags inside ipsec lib.
>
> Yes, this sounds good, I will rework it like so.
>
> > One side question - how PMD will report that this feature is supported?
> > Would it be extra field in rte_security_ipsec_xform or something different?
>
> The assumption is that if a PMD supports inline crypto and TSO it will
> support them together, as DEV_TX_OFFLOAD_SECURITY | DEV_TX_OFFLOAD_TCP_TSO
Ok, but could we have a situation when HW supports them separately, but not together?
I.E. HW supports DEV_TX_OFFLOAD_TCP_TSO for plain packets, and HW supports
DEV_TX_OFFLOAD_SECURITY, but without segmentation?
* [dpdk-dev] [PATCH v8 07/10] ipsec: add support for SA telemetry
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (5 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-12 15:25 ` Ananyev, Konstantin
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 08/10] ipsec: add support for initial SQN value Radu Nicolau
` (2 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for ipsec SAs
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 7 +
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_inb.c | 18 +-
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/ipsec_telemetry.c | 237 +++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 +++
lib/ipsec/sa.c | 10 +-
lib/ipsec/sa.h | 9 +
lib/ipsec/version.map | 9 +
10 files changed, 321 insertions(+), 11 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index fc0af5eadb..2a262f8c51 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -321,6 +321,13 @@ Supported features
AES_GMAC, HMAC-SHA1, NULL.
+Telemetry support
+------------------
+Telemetry support provides SA configuration details and IPsec packet data
+counter statistics. Per SA telemetry statistics can be enabled using
+``rte_ipsec_telemetry_sa_add`` and disabled using
+``rte_ipsec_telemetry_sa_del``. Note that these calls are not thread safe.
+
Limitations
-----------
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 77535ace36..f0bc4438a4 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -139,6 +139,7 @@ New Features
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
* Added support for NAT-T / UDP encapsulated ESP
* Added TSO offload support; only supported for inline crypto mode.
+ * Added support for SA telemetry.
Removed Items
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..6fbe468a61 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -15,7 +15,7 @@
#include "misc.h"
#include "pad.h"
-typedef uint16_t (*esp_inb_process_t)(const struct rte_ipsec_sa *sa,
+typedef uint16_t (*esp_inb_process_t)(struct rte_ipsec_sa *sa,
struct rte_mbuf *mb[], uint32_t sqn[], uint32_t dr[], uint16_t num,
uint8_t sqh_len);
@@ -573,10 +573,10 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
* *process* function for tunnel packets
*/
static inline uint16_t
-tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+tun_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
- uint32_t adj, i, k, tl;
+ uint32_t adj, i, k, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -598,6 +598,7 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
adj = hl[i] + cofs;
@@ -621,10 +622,13 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
tun_process_step3(mb[i], sa->tx_offload.msk,
sa->tx_offload.val);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
@@ -632,11 +636,11 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
* *process* function for tunnel packets
*/
static inline uint16_t
-trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+trs_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
char *np;
- uint32_t i, k, l2, tl;
+ uint32_t i, k, l2, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -656,6 +660,7 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
tl = tlen + espt[i].pad_len;
@@ -674,10 +679,13 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* update mbuf's metadata */
trs_process_step3(mb[i]);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index df7d3e8645..b18057b7da 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -625,7 +625,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -634,6 +634,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
@@ -644,10 +645,13 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
icv = rte_pktmbuf_mtod_offset(ml, void *,
ml->data_len - icv_len);
remove_sqh(icv, icv_len);
+ bytes += mb[i]->pkt_len;
k++;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
/* handle unprocessed mbufs */
if (k != num) {
@@ -667,16 +671,20 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+ bytes = 0;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->pkt_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes;
}
/* check if packet will exceed MSS and segmentation is required */
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
new file mode 100644
index 0000000000..f963d062a8
--- /dev/null
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include <rte_telemetry.h>
+#include <rte_malloc.h>
+#include "sa.h"
+
+
+struct ipsec_telemetry_entry {
+ LIST_ENTRY(ipsec_telemetry_entry) next;
+ struct rte_ipsec_sa *sa;
+};
+static LIST_HEAD(ipsec_telemetry_head, ipsec_telemetry_entry)
+ ipsec_telemetry_list = LIST_HEAD_INITIALIZER();
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ struct rte_ipsec_sa *sa = entry->sa;
+ rte_tel_data_add_array_u64(data, rte_be_to_cpu_32(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi = 0;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 10));
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ char sa_name[64];
+ sa = entry->sa;
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (sa_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes -
+ (sa->statistics.count * sa->hdr_len));
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i",
+ rte_be_to_cpu_32(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ struct rte_ipsec_sa *sa;
+ uint32_t sa_spi;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 10));
+ else
+ return -EINVAL;
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ uint64_t mode;
+ sa = entry->sa;
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+ }
+
+ return 0;
+}
+
+
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry = rte_zmalloc(NULL,
+ sizeof(struct ipsec_telemetry_entry), 0);
+ if (entry == NULL)
+ return -ENOMEM;
+ entry->sa = sa;
+ LIST_INSERT_HEAD(&ipsec_telemetry_list, entry, next);
+ return 0;
+}
+
+void
+rte_ipsec_telemetry_sa_del(struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry;
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ if (sa == entry->sa) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ return;
+ }
+ }
+}
+
+
+RTE_INIT(rte_ipsec_telemetry_init)
+{
+ rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec SAs with telemetry enabled.");
+ rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
"Returns IPsec SA statistics. Parameters: int sa_spi");
+ rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_details,
+ "Returns IPsec SA configuration. Parameters: int sa_spi");
+}
+
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..ddb9ea1767 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -1,9 +1,11 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
+sources = files('esp_inb.c', 'esp_outb.c',
+ 'sa.c', 'ses.c', 'ipsec_sad.c',
+ 'ipsec_telemetry.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..85f3ac0fff 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * Note that this function is not thread-safe.
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa);
+
+/**
+ * Disable per SA telemetry for a specific SA.
+ * Note that this function is not thread-safe.
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry disabled.
+ */
+__rte_experimental
+void
+rte_ipsec_telemetry_sa_del(struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 1dd19467a6..44dcc524ee 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -649,19 +649,25 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->pkt_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes;
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..6e59f18e16 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -132,6 +132,15 @@ struct rte_ipsec_sa {
struct replay_sqn *rsn[REPLAY_SQN_NUM];
} inb;
} sqn;
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
} __rte_cache_aligned;
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..0af27ffd60 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_sa_add;
+ rte_ipsec_telemetry_sa_del;
+
+};
--
2.25.1
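As an aside, the "SA_SPI_%i" label in the stats handler above is built from an SPI that is kept in network byte order. A minimal standalone sketch of that label generation, with ntohl() standing in for rte_be_to_cpu_32() (the helper name is illustrative, not part of the patch):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

/* Build the "SA_SPI_<spi>" telemetry dictionary key from an SPI stored
 * in network byte order. ntohl() plays the role of rte_be_to_cpu_32(). */
static void
sa_label(char *buf, size_t len, uint32_t spi_be)
{
	snprintf(buf, len, "SA_SPI_%u", (unsigned int)ntohl(spi_be));
}
```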
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v8 07/10] ipsec: add support for SA telemetry
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 07/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-10-12 15:25 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 15:25 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir, Ray Kinsella
Cc: dev, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal, gakhil,
anoobj, Doherty, Declan, Sinha, Abhijit, Buckley, Daniel M,
marchana, ktejasree, matan
>
> Add telemetry support for ipsec SAs
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> /* check if packet will exceed MSS and segmentation is required */
> diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
> new file mode 100644
> index 0000000000..f963d062a8
> --- /dev/null
> +++ b/lib/ipsec/ipsec_telemetry.c
> @@ -0,0 +1,237 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 Intel Corporation
> + */
> +
> +#include <rte_ipsec.h>
> +#include <rte_telemetry.h>
> +#include <rte_malloc.h>
> +#include "sa.h"
> +
> +
> +struct ipsec_telemetry_entry {
> + LIST_ENTRY(ipsec_telemetry_entry) next;
> + struct rte_ipsec_sa *sa;
As a nit, might be const struct rte_ipsec_sa *sa?
> +};
> +static LIST_HEAD(ipsec_telemetry_head, ipsec_telemetry_entry)
> + ipsec_telemetry_list = LIST_HEAD_INITIALIZER();
> +
> +static int
> +handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
> + const char *params __rte_unused,
> + struct rte_tel_data *data)
> +{
> + struct ipsec_telemetry_entry *entry;
> + rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
> +
> + LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
> + struct rte_ipsec_sa *sa = entry->sa;
> + rte_tel_data_add_array_u64(data, rte_be_to_cpu_32(sa->spi));
> + }
> +
> + return 0;
> +}
> +
> +/**
> + * Handle IPsec SA statistics telemetry request
> + *
> + * Return dict of SA's with dict of key/value counters
> + *
> + * {
> + * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
> + * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
> + * }
> + *
> + */
> +static int
> +handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *data)
> +{
> + struct ipsec_telemetry_entry *entry;
> + struct rte_ipsec_sa *sa;
> + uint32_t sa_spi = 0;
> +
> + if (params)
> + sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 10));
As a nit - it's probably worth checking that strtoul didn't return any errors.
Also, I'm not sure we should limit users to decimal input only.
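A stricter parse along those lines could look like the following standalone sketch (the function name is illustrative; passing base 0 to strtoul also accepts hex input, which addresses the decimal-only concern):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse an SPI from a telemetry parameter string, rejecting empty
 * input, trailing garbage, overflow, and values wider than 32 bits.
 * Base 0 accepts decimal, octal and hex ("0x...") forms. */
static int
parse_spi(const char *s, uint32_t *spi)
{
	char *end;
	unsigned long v;

	if (s == NULL || *s == '\0')
		return -1;
	errno = 0;
	v = strtoul(s, &end, 0);
	if (errno != 0 || *end != '\0' || v > UINT32_MAX)
		return -1;
	*spi = (uint32_t)v;
	return 0;
}
```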
> +
> + rte_tel_data_start_dict(data);
> +
> + LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
> + char sa_name[64];
> + sa = entry->sa;
> + static const char *name_pkt_cnt = "count";
> + static const char *name_byte_cnt = "bytes";
> + static const char *name_error_cnt = "errors";
> + struct rte_tel_data *sa_data;
> +
> + /* If user provided SPI only get telemetry for that SA */
> + if (sa_spi && (sa_spi != sa->spi))
> + continue;
> +
> + /* allocate telemetry data struct for SA telemetry */
> + sa_data = rte_tel_data_alloc();
> + if (!sa_data)
> + return -ENOMEM;
> +
> + rte_tel_data_start_dict(sa_data);
> +
> + /* add telemetry key/values pairs */
> + rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
> + sa->statistics.count);
> +
> + rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
> + sa->statistics.bytes -
> + (sa->statistics.count * sa->hdr_len));
> +
> + rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
> + sa->statistics.errors.count);
> +
> + /* generate telemetry label */
> + snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i",
> + rte_be_to_cpu_32(sa->spi));
> +
> + /* add SA telemetry to dictionary container */
> + rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
> + }
> +
> + return 0;
> +}
> +
> +static int
> +handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
> + const char *params,
> + struct rte_tel_data *data)
> +{
> + struct ipsec_telemetry_entry *entry;
> + struct rte_ipsec_sa *sa;
> + uint32_t sa_spi;
> +
> + if (params)
> + sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 10));
> + else
> + return -EINVAL;
> +
> + rte_tel_data_start_dict(data);
> +
> + LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
> + uint64_t mode;
> + sa = entry->sa;
> + if (sa_spi != sa->spi)
> + continue;
> +
> + /* add SA configuration key/values pairs */
> + rte_tel_data_add_dict_string(data, "Type",
> + (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
> + RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
> +
> + rte_tel_data_add_dict_string(data, "Direction",
> + (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
> + RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
> +
> + mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
> +
> + if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
> + rte_tel_data_add_dict_string(data, "Mode", "Transport");
> + } else {
> + rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
> +
> + if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
> + RTE_IPSEC_SATP_NATT_ENABLE) {
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv4-UDP");
> + } else if (sa->type &
> + RTE_IPSEC_SATP_MODE_TUNLV6) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv4-UDP");
> + }
> + } else {
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv4-UDP");
> + } else if (sa->type &
> + RTE_IPSEC_SATP_MODE_TUNLV6) {
> + rte_tel_data_add_dict_string(data,
> + "Tunnel-Type",
> + "IPv4-UDP");
> + }
> + }
> + }
> +
> + rte_tel_data_add_dict_string(data,
> + "extended-sequence-number",
> + (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
> + RTE_IPSEC_SATP_ESN_ENABLE ?
> + "enabled" : "disabled");
> +
> + if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
> + RTE_IPSEC_SATP_DIR_IB)
> +
> + if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
> + rte_tel_data_add_dict_u64(data,
> + "sequence-number",
> + sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
> + else
> + rte_tel_data_add_dict_u64(data,
> + "sequence-number", 0);
> + else
> + rte_tel_data_add_dict_u64(data, "sequence-number",
> + sa->sqn.outb);
> +
> + rte_tel_data_add_dict_string(data,
> + "explicit-congestion-notification",
> + (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
> + RTE_IPSEC_SATP_ECN_ENABLE ?
> + "enabled" : "disabled");
> +
> + rte_tel_data_add_dict_string(data,
> + "copy-DSCP",
> + (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
> + RTE_IPSEC_SATP_DSCP_ENABLE ?
> + "enabled" : "disabled");
> + }
> +
> + return 0;
> +}
> +
> +
> +int
> +rte_ipsec_telemetry_sa_add(struct rte_ipsec_sa *sa)
Just as a nit, here and in _del:
it probably could be const struct rte_ipsec_sa *sa
> +{
> + struct ipsec_telemetry_entry *entry = rte_zmalloc(NULL,
> + sizeof(struct ipsec_telemetry_entry), 0);
Need to check malloc() return value.
Can't assume it is always successful.
With that fixed:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> + entry->sa = sa;
> + LIST_INSERT_HEAD(&ipsec_telemetry_list, entry, next);
> + return 0;
> +}
> +
> +void
> +rte_ipsec_telemetry_sa_del(struct rte_ipsec_sa *sa)
> +{
> + struct ipsec_telemetry_entry *entry;
> + LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
> + if (sa == entry->sa) {
> + LIST_REMOVE(entry, next);
> + rte_free(entry);
> + return;
> + }
> + }
> +}
> +
> +
* [dpdk-dev] [PATCH v8 08/10] ipsec: add support for initial SQN value
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (6 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 07/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-12 15:35 ` Ananyev, Konstantin
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 09/10] doc: remove unneeded ipsec new field deprecation Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 10/10] doc: remove unneeded security deprecation Radu Nicolau
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/sa.c | 25 ++++++++++++++++++-------
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f0bc4438a4..0686679677 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -140,6 +140,7 @@ New Features
* Added support for NAT-T / UDP encapsulated ESP
* Added TSO offload support; only supported for inline crypto mode.
* Added support for SA telemetry.
+ * Added support for setting a non-default starting ESN value.
Removed Items
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 44dcc524ee..85e06069de 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn > 1 ? sqn : 1;
algo_type = sa->algo_type;
@@ -376,7 +376,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
}
/*
@@ -502,7 +502,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, prm->ipsec_xform.esn.value);
break;
}
@@ -513,15 +513,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
+ uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -591,13 +595,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
UINT32_MAX : UINT64_MAX;
+ /* if we are starting from a non-zero sn value */
+ if (prm->ipsec_xform.esn.value > 0) {
+ if (prm->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ sa->sqn.outb = prm->ipsec_xform.esn.value;
+ }
+
rc = esp_sa_init(sa, prm, &cxf);
if (rc != 0)
rte_ipsec_sa_fini(sa);
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
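The esp_outb_init() change above effectively clamps the configured starting sequence number, since ESP never transmits sequence number 0 (per RFC 4303, the first packet on an SA carries sequence number 1). A minimal standalone sketch of that clamp (the helper name is illustrative):

```c
#include <stdint.h>

/* Pick the initial outbound ESP sequence number: honour the caller's
 * requested value, but never start below 1, mirroring
 * "sa->sqn.outb = sqn > 1 ? sqn : 1;" in the patch above. */
static uint64_t
initial_outb_sqn(uint64_t requested)
{
	return requested > 1 ? requested : 1;
}
```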
* Re: [dpdk-dev] [PATCH v8 08/10] ipsec: add support for initial SQN value
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 08/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-10-12 15:35 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 15:35 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> Update IPsec library to support initial SQN value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/ipsec/sa.c | 25 ++++++++++++++++++-------
> 2 files changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index f0bc4438a4..0686679677 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -140,6 +140,7 @@ New Features
> * Added support for NAT-T / UDP encapsulated ESP
> * Added support TSO offload support; only supported for inline crypto mode.
> * Added support for SA telemetry.
> + * Added support for setting a non default starting ESN value.
>
>
> Removed Items
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 44dcc524ee..85e06069de 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> * Init ESP outbound specific things.
> */
> static void
> -esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
> +esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
> {
> uint8_t algo_type;
>
> - sa->sqn.outb = 1;
> + sa->sqn.outb = sqn > 1 ? sqn : 1;
>
> algo_type = sa->algo_type;
>
> @@ -376,7 +376,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
> sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
>
> - esp_outb_init(sa, sa->hdr_len);
> + esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
> }
>
> /*
> @@ -502,7 +502,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
> RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> - esp_outb_init(sa, 0);
> + esp_outb_init(sa, 0, prm->ipsec_xform.esn.value);
> break;
> }
>
> @@ -513,15 +513,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> * helper function, init SA replay structure.
> */
> static void
> -fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
> +fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
> + uint64_t sqn)
> {
> sa->replay.win_sz = wnd_sz;
> sa->replay.nb_bucket = nb_bucket;
> sa->replay.bucket_index_mask = nb_bucket - 1;
> sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
> - if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
> + sa->sqn.inb.rsn[0]->sqn = sqn;
> + if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
> sa->sqn.inb.rsn[1] = (struct replay_sqn *)
> ((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
> + sa->sqn.inb.rsn[1]->sqn = sqn;
> + }
> }
>
> int
> @@ -591,13 +595,20 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
> UINT32_MAX : UINT64_MAX;
>
> + /* if we are starting from a non-zero sn value */
> + if (prm->ipsec_xform.esn.value > 0) {
> + if (prm->ipsec_xform.direction ==
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
> + sa->sqn.outb = prm->ipsec_xform.esn.value;
> + }
> +
I think I already asked this question on a previous version, but I don't
remember what the answer was, so I'll ask again:
You do set sa->sqn.outb inside esp_outb_init().
Which will be invoked by esp_sa_init() below.
Why do you need to duplicate it here?
> rc = esp_sa_init(sa, prm, &cxf);
> if (rc != 0)
> rte_ipsec_sa_fini(sa);
>
> /* fill replay window related fields */
> if (nb != 0)
> - fill_sa_replay(sa, wsz, nb);
> + fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
>
> return sz;
> }
> --
> 2.25.1
* [dpdk-dev] [PATCH v8 09/10] doc: remove unneeded ipsec new field deprecation
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (7 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 08/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 10/10] doc: remove unneeded security deprecation Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
The deprecation notice regarding extending rte_ipsec_sa_prm with a
new field hdr_l3_len is no longer applicable.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
1 file changed, 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d24d69b669..b4e367712a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -217,9 +217,6 @@ Deprecation Notices
will be updated with new fields to support new features like IPsec inner
checksum, TSO in case of protocol offload.
-* ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
- ``hdr_l3_len`` to configure tunnel L3 header length.
-
* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
to make the driver interface as internal and the structures ``rte_eventdev_data``,
``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
--
2.25.1
* [dpdk-dev] [PATCH v8 10/10] doc: remove unneeded security deprecation
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
` (8 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 09/10] doc: remove unneeded ipsec new field deprecation Radu Nicolau
@ 2021-10-11 11:29 ` Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Following feedback, the new fields regarding TSO support were not
implemented; it was decided instead to support TSO by using existing
mbuf fields.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b4e367712a..6670443248 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -210,12 +210,9 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with:
- new field: IPsec payload MSS (Maximum Segment Size).
-
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like IPsec inner
- checksum, TSO in case of protocol offload.
+ checksum.
* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
to make the driver interface as internal and the structures ``rte_eventdev_data``,
--
2.25.1
* [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (16 preceding siblings ...)
2021-10-11 11:29 ` [dpdk-dev] [PATCH v8 00/10] " Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform Radu Nicolau
` (9 more replies)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
18 siblings, 10 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
TSO, NAT-T/UDP encapsulation, ESN
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
mbuf offload flags
Initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (10):
security: add ESN field to ipsec_xform
ipsec: add support for AEAD algorithms
security: add UDP params for IPsec NAT-T
ipsec: add support for NAT-T
mbuf: add IPsec ESP tunnel type
ipsec: add transmit segmentation offload support
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
doc: remove unneeded ipsec new field deprecation
doc: remove unneeded security deprecation
doc/guides/prog_guide/ipsec_lib.rst | 14 +-
doc/guides/rel_notes/deprecation.rst | 11 --
doc/guides/rel_notes/release_21_11.rst | 17 ++
lib/ipsec/crypto.h | 137 ++++++++++++++
lib/ipsec/esp_inb.c | 84 ++++++++-
lib/ipsec/esp_outb.c | 211 +++++++++++++++++----
lib/ipsec/ipsec_telemetry.c | 244 +++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 +++
lib/ipsec/rte_ipsec_sa.h | 9 +-
lib/ipsec/sa.c | 119 ++++++++++--
lib/ipsec/sa.h | 15 ++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 15 ++
15 files changed, 843 insertions(+), 72 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
v5: update commit messages after feedback
update the UDP encapsulation patch to actually use the configured ports
v6: fix initial SQN value
v7: reworked the patches after feedback
v8: updated library doc, release notes and removed deprecation notices
v9: reworked telemetry, tso and esn patches
2.25.1
* [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 02/10] ipsec: add support for AEAD algorithms Radu Nicolau
` (8 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 8 ++++++++
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a4e86b31f5..accb9c7d83 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -206,7 +206,7 @@ Deprecation Notices
* security: The structure ``rte_security_ipsec_xform`` will be extended with
multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
+ IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like TSO in case of
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 54718ff367..f840586a20 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -265,6 +265,11 @@ ABI Changes
packet IPv4 header checksum and L4 checksum need to be offloaded to
security device.
+* security: A new structure ``esn`` was added in structure
+ ``rte_security_ipsec_xform`` to set an initial ESN value. This permits the
+ application to start from an arbitrary ESN value for debug and SA lifetime
+ enforcement purposes.
+
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..764ce83bca 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -318,6 +318,14 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
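For illustration, the anonymous low/hi pair in the new esn field aliases the 64-bit value, with low mapping to the least significant 32 bits on little-endian hosts. The standalone sketch below mirrors that union shape (type and helper names are illustrative); since the aliasing is endianness-dependent, the helper composes the 64-bit value explicitly instead of relying on the layout:

```c
#include <stdint.h>

/* Mirror of the esn field added to rte_security_ipsec_xform above.
 * On a little-endian host, 'low' aliases the lower 32 bits of 'value'
 * and 'hi' the upper 32 bits; both views occupy the same 8 bytes. */
union esn {
	uint64_t value;
	struct {
		uint32_t low;
		uint32_t hi;
	};
};

/* Endianness-independent composition of the 64-bit ESN from its
 * 32-bit halves. */
static uint64_t
esn_from_parts(uint32_t hi, uint32_t low)
{
	return ((uint64_t)hi << 32) | low;
}
```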
* [dpdk-dev] [PATCH v9 02/10] ipsec: add support for AEAD algorithms
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
` (7 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 3 +-
doc/guides/rel_notes/release_21_11.rst | 4 +
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 +++++++++++-
lib/ipsec/esp_outb.c | 70 ++++++++++++-
lib/ipsec/sa.c | 54 +++++++++-
lib/ipsec/sa.h | 6 ++
7 files changed, 328 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 9f2b26072d..93e213bf36 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -313,7 +313,8 @@ Supported features
* ESN and replay window.
-* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, HMAC-SHA1, NULL.
+* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
+ AES_GMAC, HMAC-SHA1, NULL.
Limitations
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f840586a20..544d44b1a8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -154,6 +154,10 @@ New Features
* Added tests to verify tunnel header verification in IPsec inbound.
* Added tests to verify inner checksum.
+* **IPsec library new features.**
+
+ * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
+
Removed Items
-------------
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally this should be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 4106, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally this should be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v9 03/10] security: add UDP params for IPsec NAT-T
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 01/10] security: add ESN field to ipsec_xform Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 02/10] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 04/10] ipsec: add support for NAT-T Radu Nicolau
` (6 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Add support for specifying UDP port params for the UDP encapsulation option.
RFC 3948 section 2.1 does not mandate the use of specific UDP ports for the
UDP-Encapsulated ESP Header.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 5 ++---
doc/guides/rel_notes/release_21_11.rst | 4 ++++
lib/security/rte_security.h | 7 +++++++
3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index accb9c7d83..6517e7821f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -204,9 +204,8 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with
- multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size).
+* security: The structure ``rte_security_ipsec_xform`` will be extended with:
+ new field: IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like TSO in case of
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 544d44b1a8..1748c2db05 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -274,6 +274,10 @@ ABI Changes
application to start from an arbitrary ESN value for debug and SA lifetime
enforcement purposes.
+* security: A new structure ``udp`` was added in structure
+ ``rte_security_ipsec_xform`` to allow setting the source and destination ports
+ for UDP encapsulated IPsec traffic.
+
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 764ce83bca..17d0e95412 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -326,6 +331,8 @@ struct rte_security_ipsec_xform {
};
} esn;
/**< Extended Sequence Number */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
};
/**
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v9 04/10] ipsec: add support for NAT-T
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-14 12:34 ` Ananyev, Konstantin
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 05/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (5 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 2 ++
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_outb.c | 9 ++++++
lib/ipsec/rte_ipsec_sa.h | 9 +++++-
lib/ipsec/sa.c | 39 ++++++++++++++++++++++----
5 files changed, 54 insertions(+), 6 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 93e213bf36..af51ff8131 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -313,6 +313,8 @@ Supported features
* ESN and replay window.
+* NAT-T / UDP encapsulated ESP.
+
* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
AES_GMAC, HMAC-SHA1, NULL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1748c2db05..e9fb169d44 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -157,6 +157,7 @@ New Features
* **IPsec library new features.**
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
+ * Added support for NAT-T / UDP encapsulated ESP.
Removed Items
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..0e3314b358 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -185,6 +186,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
+ /* if UDP encap is enabled update the dgram_len */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ (ph - sizeof(struct rte_udp_hdr));
+ udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
+ sa->hdr_l3_off - sa->hdr_len);
+ }
+
/* update original and new ip header fields */
update_tun_outb_l3hdr(sa, ph + sa->hdr_l3_off, ph + hlen,
mb->pkt_len - sqh_len, sa->hdr_l3_off, sqn_low16(sqc));
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..3a22705055 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -78,6 +78,7 @@ struct rte_ipsec_sa_prm {
* - for TUNNEL outer IP version (IPv4/IPv6)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* ...
*/
@@ -89,7 +90,8 @@ enum {
RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
- RTE_SATP_LOG2_DSCP
+ RTE_SATP_LOG2_DSCP,
+ RTE_SATP_LOG2_NATT
};
#define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG2_IPV)
@@ -125,6 +127,11 @@ enum {
#define RTE_IPSEC_SATP_DSCP_DISABLE (0ULL << RTE_SATP_LOG2_DSCP)
#define RTE_IPSEC_SATP_DSCP_ENABLE (1ULL << RTE_SATP_LOG2_DSCP)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
+
/**
* get type of given SA
* @return
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..2830506385 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -217,6 +218,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -348,20 +353,36 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
/*
* Init ESP outbound tunnel specific things.
*/
-static void
+static int
esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
{
sa->proto = prm->tun.next_proto;
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+ if (prm->tun.hdr_len > IPSEC_MAX_HDR_SIZE)
+ return -EINVAL;
+ memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
+
+ /* insert UDP header if UDP encapsulation is enabled */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ &sa->hdr[prm->tun.hdr_len];
+ sa->hdr_len += sizeof(struct rte_udp_hdr);
+ if (sa->hdr_len > IPSEC_MAX_HDR_SIZE)
+ return -EINVAL;
+ udph->src_port = prm->ipsec_xform.udp.sport;
+ udph->dst_port = prm->ipsec_xform.udp.dport;
+ udph->dgram_cksum = 0;
+ }
+
/* update l2_len and l3_len fields for outbound mbuf */
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
-
esp_outb_init(sa, sa->hdr_len);
+
+ return 0;
}
/*
@@ -372,7 +393,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +497,17 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
- esp_outb_tun_init(sa, prm);
+ if (esp_outb_tun_init(sa, prm))
+ return -EINVAL;
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v9 04/10] ipsec: add support for NAT-T
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 04/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-10-14 12:34 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-14 12:34 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> Add support for the IPsec NAT-Traversal use case for Tunnel mode
> packets.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/prog_guide/ipsec_lib.rst | 2 ++
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/ipsec/esp_outb.c | 9 ++++++
> lib/ipsec/rte_ipsec_sa.h | 9 +++++-
> lib/ipsec/sa.c | 39 ++++++++++++++++++++++----
> 5 files changed, 54 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> index 93e213bf36..af51ff8131 100644
> --- a/doc/guides/prog_guide/ipsec_lib.rst
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -313,6 +313,8 @@ Supported features
>
> * ESN and replay window.
>
> +* NAT-T / UDP encapsulated ESP.
> +
> * algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
> AES_GMAC, HMAC-SHA1, NULL.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 1748c2db05..e9fb169d44 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -157,6 +157,7 @@ New Features
> * **IPsec library new features.**
>
> * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
> + * Added support for NAT-T / UDP encapsulated ESP
>
>
> Removed Items
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index a3f77469c3..0e3314b358 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -5,6 +5,7 @@
> #include <rte_ipsec.h>
> #include <rte_esp.h>
> #include <rte_ip.h>
> +#include <rte_udp.h>
> #include <rte_errno.h>
> #include <rte_cryptodev.h>
>
> @@ -185,6 +186,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> /* copy tunnel pkt header */
> rte_memcpy(ph, sa->hdr, sa->hdr_len);
>
> + /* if UDP encap is enabled update the dgram_len */
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
> + (ph - sizeof(struct rte_udp_hdr));
> + udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
> + sa->hdr_l3_off - sa->hdr_len);
> + }
> +
> /* update original and new ip header fields */
> update_tun_outb_l3hdr(sa, ph + sa->hdr_l3_off, ph + hlen,
> mb->pkt_len - sqh_len, sa->hdr_l3_off, sqn_low16(sqc));
> diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
> index cf51ad8338..3a22705055 100644
> --- a/lib/ipsec/rte_ipsec_sa.h
> +++ b/lib/ipsec/rte_ipsec_sa.h
> @@ -78,6 +78,7 @@ struct rte_ipsec_sa_prm {
> * - for TUNNEL outer IP version (IPv4/IPv6)
> * - are SA SQN operations 'atomic'
> * - ESN enabled/disabled
> + * - NAT-T UDP encapsulated (TUNNEL mode only)
> * ...
> */
>
> @@ -89,7 +90,8 @@ enum {
> RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
> RTE_SATP_LOG2_ESN,
> RTE_SATP_LOG2_ECN,
> - RTE_SATP_LOG2_DSCP
> + RTE_SATP_LOG2_DSCP,
> + RTE_SATP_LOG2_NATT
> };
>
> #define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG2_IPV)
> @@ -125,6 +127,11 @@ enum {
> #define RTE_IPSEC_SATP_DSCP_DISABLE (0ULL << RTE_SATP_LOG2_DSCP)
> #define RTE_IPSEC_SATP_DSCP_ENABLE (1ULL << RTE_SATP_LOG2_DSCP)
>
> +#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
> +#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
> +
> +
> /**
> * get type of given SA
> * @return
> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> index 720e0f365b..2830506385 100644
> --- a/lib/ipsec/sa.c
> +++ b/lib/ipsec/sa.c
> @@ -5,6 +5,7 @@
> #include <rte_ipsec.h>
> #include <rte_esp.h>
> #include <rte_ip.h>
> +#include <rte_udp.h>
> #include <rte_errno.h>
> #include <rte_cryptodev.h>
>
> @@ -217,6 +218,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
> } else
> return -EINVAL;
>
> + /* check for UDP encapsulation flag */
> + if (prm->ipsec_xform.options.udp_encap == 1)
> + tp |= RTE_IPSEC_SATP_NATT_ENABLE;
> +
> /* check for ESN flag */
> if (prm->ipsec_xform.options.esn == 0)
> tp |= RTE_IPSEC_SATP_ESN_DISABLE;
> @@ -348,20 +353,36 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
> /*
> * Init ESP outbound tunnel specific things.
> */
> -static void
> +static int
> esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> {
> sa->proto = prm->tun.next_proto;
> sa->hdr_len = prm->tun.hdr_len;
> sa->hdr_l3_off = prm->tun.hdr_l3_off;
>
> + if (prm->tun.hdr_len > IPSEC_MAX_HDR_SIZE)
> + return -EINVAL;
That's not exactly what I asked for.
We already have this check in rte_ipsec_sa_init():
if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
prm->tun.hdr_len > sizeof(sa->hdr))
What we need to check is that, if NAT-T is enabled, the new header size won't overflow
our sa->hdr buffer.
So I'd suggest that instead of the check above, we do something like:
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -560,7 +560,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
uint32_t size)
{
int32_t rc, sz;
- uint32_t nb, wsz;
+ uint32_t hlen, nb, wsz;
uint64_t type;
struct crypto_xform cxf;
@@ -584,9 +584,14 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
return -EINVAL;
- if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
- prm->tun.hdr_len > sizeof(sa->hdr))
- return -EINVAL;
+ if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+
+ hlen = prm->tun.hdr_len;
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE)
+ hlen += sizeof(struct rte_udp_hdr);
+ if (hlen > sizeof(sa->hdr))
+ return -EINVAL;
+ }
rc = fill_crypto_xform(&cxf, type, prm);
if (rc != 0)
That way, we can keep esp_outb_tun_init() as void.
With that in place, feel free to add:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> + memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
> +
> + /* insert UDP header if UDP encapsulation is enabled */
> + if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
> + struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
> + &sa->hdr[prm->tun.hdr_len];
> + sa->hdr_len += sizeof(struct rte_udp_hdr);
> + if (sa->hdr_len > IPSEC_MAX_HDR_SIZE)
> + return -EINVAL;
> + udph->src_port = prm->ipsec_xform.udp.sport;
> + udph->dst_port = prm->ipsec_xform.udp.dport;
> + udph->dgram_cksum = 0;
> + }
> +
> /* update l2_len and l3_len fields for outbound mbuf */
> sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
> sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
>
> - memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
> -
> esp_outb_init(sa, sa->hdr_len);
> +
> + return 0;
> }
>
> /*
> @@ -372,7 +393,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> const struct crypto_xform *cxf)
> {
> static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> - RTE_IPSEC_SATP_MODE_MASK;
> + RTE_IPSEC_SATP_MODE_MASK |
> + RTE_IPSEC_SATP_NATT_MASK;
>
> if (prm->ipsec_xform.options.ecn)
> sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
> @@ -475,10 +497,17 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_inb_init(sa);
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> - esp_outb_tun_init(sa, prm);
> + if (esp_outb_tun_init(sa, prm))
> + return -EINVAL;
> break;
> + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
> + RTE_IPSEC_SATP_NATT_ENABLE):
> case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> esp_outb_init(sa, 0);
> break;
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v9 05/10] mbuf: add IPsec ESP tunnel type
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 04/10] ipsec: add support for NAT-T Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
` (4 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add ESP tunnel type to the tunnel types list that can be specified
for TSO or checksum on the inner part of tunnel packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 9d8e3ddc86..4747c0c452 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -255,6 +255,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 05/10] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-14 14:42 ` Ananyev, Konstantin
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 07/10] ipsec: add support for SA telemetry Radu Nicolau
` (3 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 2 +
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_outb.c | 120 +++++++++++++++++++------
3 files changed, 97 insertions(+), 26 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index af51ff8131..fc0af5eadb 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -315,6 +315,8 @@ Supported features
* NAT-T / UDP encapsulated ESP.
+* TSO support (only for inline crypto mode)
+
* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
AES_GMAC, HMAC-SHA1, NULL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index e9fb169d44..0a9c71d92e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -158,6 +158,7 @@ New Features
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
* Added support for NAT-T / UDP encapsulated ESP
+ * Added TSO offload support; only supported for inline crypto mode.
Removed Items
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 0e3314b358..d327c32a38 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -18,7 +18,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t tso);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -139,7 +139,7 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
{
uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
@@ -157,11 +157,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align packet when using TSO offload */
+ if (!tso)
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (!tso)
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -309,7 +318,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -336,7 +345,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -358,11 +367,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+ /* We don't need to pad/align packet when using TSO offload */
+ if (!tso)
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
/* pad length + esp tail */
pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+
+ /* We don't append ICV length when using TSO offload */
+ if (!tso)
+ tlen = pdlen + sa->icv_len + sqh_len;
+ else
+ tlen = pdlen + sqh_len;
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -452,7 +469,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -549,7 +566,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -660,6 +677,20 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
}
}
+
+static inline int
+esn_outb_nb_segments(struct rte_mbuf *m)
+{
+ if (m->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) {
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+ uint16_t segments =
+ (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) ?
+ (pkt_l3len + m->tso_segsz - 1) / m->tso_segsz : 1;
+ return segments;
+ }
+ return 1; /* no TSO */
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -669,29 +700,47 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, n, nb_sqn;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
+ nb_sqn = 0;
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_sqn += nb_segs[i];
+ /* setup outer l2 and l3 len for TSO */
+ if (nb_segs[i] > 1) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ mb[i]->outer_l3_len =
+ sizeof(struct rte_ipv4_hdr);
+ else
+ mb[i]->outer_l3_len =
+ sizeof(struct rte_ipv6_hdr);
+ mb[i]->outer_l2_len = mb[i]->l2_len;
+ }
+ }
- n = num;
+ n = nb_sqn;
sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ if (n != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
- sqc = rte_cpu_to_be_64(sqn + i);
+ sqc = rte_cpu_to_be_64(sqn);
gen_iv(iv, sqc);
+ sqn += nb_segs[i];
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
+ nb_segs[i] > 1);
k += (rc >= 0);
@@ -703,8 +752,8 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -719,29 +768,48 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, n, nb_sqn;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
+ nb_sqn = 0;
+ /* Calculate number of sequence numbers required */
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_sqn += nb_segs[i];
+ /* setup outer l2 and l3 len for TSO */
+ if (nb_segs[i] > 1) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
+ mb[i]->outer_l3_len =
+ sizeof(struct rte_ipv4_hdr);
+ else
+ mb[i]->outer_l3_len =
+ sizeof(struct rte_ipv6_hdr);
+ mb[i]->outer_l2_len = mb[i]->l2_len;
+ }
+ }
- n = num;
+ n = nb_sqn;
sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ if (n != nb_sqn)
rte_errno = EOVERFLOW;
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
- sqc = rte_cpu_to_be_64(sqn + i);
+ sqc = rte_cpu_to_be_64(sqn);
gen_iv(iv, sqc);
+ sqn += nb_segs[i];
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
+ nb_segs[i] > 1);
k += (rc >= 0);
@@ -753,8 +821,8 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-10-14 14:42 ` Ananyev, Konstantin
2021-10-18 16:32 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-14 14:42 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> > Add support for transmit segmentation offload to inline crypto processing
> mode. This offload is not supported by other offload modes, as at a
> minimum it requires inline crypto for IPsec to be supported on the
> network interface.
Thanks for the rework.
It looks much better to me now, but there are still a few more comments.
Konstantin
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/prog_guide/ipsec_lib.rst | 2 +
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/ipsec/esp_outb.c | 120 +++++++++++++++++++------
> 3 files changed, 97 insertions(+), 26 deletions(-)
>
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> index af51ff8131..fc0af5eadb 100644
> --- a/doc/guides/prog_guide/ipsec_lib.rst
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -315,6 +315,8 @@ Supported features
>
> * NAT-T / UDP encapsulated ESP.
>
> +* TSO support (only for inline crypto mode)
> +
> * algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
> AES_GMAC, HMAC-SHA1, NULL.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index e9fb169d44..0a9c71d92e 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -158,6 +158,7 @@ New Features
>
> * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
> * Added support for NAT-T / UDP encapsulated ESP
> + * Added TSO offload support; only supported for inline crypto mode.
>
>
> Removed Items
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 0e3314b358..d327c32a38 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -18,7 +18,7 @@
>
> typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> - union sym_op_data *icv, uint8_t sqh_len);
> + union sym_op_data *icv, uint8_t sqh_len, uint8_t tso);
>
> /*
> * helper function to fill crypto_sym op for cipher+auth algorithms.
> @@ -139,7 +139,7 @@ outb_cop_prepare(struct rte_crypto_op *cop,
> static inline int32_t
> outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> - union sym_op_data *icv, uint8_t sqh_len)
> + union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
> {
> uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
> struct rte_mbuf *ml;
> @@ -157,11 +157,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* number of bytes to encrypt */
> clen = plen + sizeof(*espt);
> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> + /* We don't need to pad/align packet when using TSO offload */
> + if (!tso)
> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
>
> /* pad length + esp tail */
> pdlen = clen - plen;
> - tlen = pdlen + sa->icv_len + sqh_len;
> +
> + /* We don't append ICV length when using TSO offload */
> + if (!tso)
> + tlen = pdlen + sa->icv_len + sqh_len;
> + else
> + tlen = pdlen + sqh_len;
>
> /* do append and prepend */
> ml = rte_pktmbuf_lastseg(mb);
> @@ -309,7 +318,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
> /* try to update the packet itself */
> rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
> - sa->sqh_len);
> + sa->sqh_len, 0);
> /* success, setup crypto op */
> if (rc >= 0) {
> outb_pkt_xprepare(sa, sqc, &icv);
> @@ -336,7 +345,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> static inline int32_t
> outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> - union sym_op_data *icv, uint8_t sqh_len)
> + union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
> {
> uint8_t np;
> uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
> @@ -358,11 +367,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
> /* number of bytes to encrypt */
> clen = plen + sizeof(*espt);
> - clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> + /* We don't need to pad/align packet when using TSO offload */
> + if (!tso)
> + clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>
> /* pad length + esp tail */
> pdlen = clen - plen;
> - tlen = pdlen + sa->icv_len + sqh_len;
> +
> + /* We don't append ICV length when using TSO offload */
> + if (!tso)
> + tlen = pdlen + sa->icv_len + sqh_len;
> + else
> + tlen = pdlen + sqh_len;
>
> /* do append and insert */
> ml = rte_pktmbuf_lastseg(mb);
> @@ -452,7 +469,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
> /* try to update the packet itself */
> rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
> - sa->sqh_len);
> + sa->sqh_len, 0);
> /* success, setup crypto op */
> if (rc >= 0) {
> outb_pkt_xprepare(sa, sqc, &icv);
> @@ -549,7 +566,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
> gen_iv(ivbuf[k], sqc);
>
> /* try to update the packet itself */
> - rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
> + rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
>
> /* success, proceed with preparations */
> if (rc >= 0) {
> @@ -660,6 +677,20 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> }
> }
>
> +
> +static inline int
> +esn_outb_nb_segments(struct rte_mbuf *m)
> +{
> + if (m->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) {
> + uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> + uint16_t segments =
> + (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) ?
> + (pkt_l3len + m->tso_segsz - 1) / m->tso_segsz : 1;
> + return segments;
> + }
> + return 1; /* no TSO */
> +}
> +
> /*
> * process group of ESP outbound tunnel packets destined for
> * INLINE_CRYPTO type of device.
> @@ -669,29 +700,47 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> int32_t rc;
> - uint32_t i, k, n;
> + uint32_t i, k, n, nb_sqn;
> uint64_t sqn;
> rte_be64_t sqc;
> struct rte_ipsec_sa *sa;
> union sym_op_data icv;
> uint64_t iv[IPSEC_MAX_IV_QWORD];
> uint32_t dr[num];
> + uint16_t nb_segs[num];
>
> sa = ss->sa;
> + nb_sqn = 0;
> + for (i = 0; i != num; i++) {
> + nb_segs[i] = esn_outb_nb_segments(mb[i]);
> + nb_sqn += nb_segs[i];
> + /* setup outer l2 and l3 len for TSO */
> + if (nb_segs[i] > 1) {
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> + mb[i]->outer_l3_len =
> + sizeof(struct rte_ipv4_hdr);
> + else
> + mb[i]->outer_l3_len =
> + sizeof(struct rte_ipv6_hdr);
> + mb[i]->outer_l2_len = mb[i]->l2_len;
I still don't understand your logic behind setting these fields here.
How it looks to me:
It is a tunnel mode, so ipsec lib appends it's tunnel header.
In the normal (non-TSO) case it sets up l2_len and l3_len, which are stored inside sa->tx_offload
(in the non-TSO case we don't care about the inner/outer distinction and don't have to set up
outer fields or set TX_PKT_OUTER flags).
Now for TSO we do need to do that, right?
So as I understand:
sa->tx_offload.l2_len will become mb->outer_l2_len
sa->tx_offload.l3_len will become mb->outer_l3_len
mb->l2_len should be set to zero
mb->l3_len, mb->l4_len, mb->tso_segsz should remain the same
(ipsec lib shouldn't modify them).
Please correct me, if I missed something here.
Also note that right now we set up the mbuf tx_offload way below
these lines - at outb_tun_pkt_prepare().
So probably these changes have to be adjusted after that function call.
> + }
> + }
>
> - n = num;
> + n = nb_sqn;
> sqn = esn_outb_update_sqn(sa, &n);
> - if (n != num)
> + if (n != nb_sqn)
> rte_errno = EOVERFLOW;
>
> k = 0;
> - for (i = 0; i != n; i++) {
> + for (i = 0; i != num; i++) {
As I stated in the previous mail, you can't just assume that n == num always.
That way you just ignore the SQN overflow error you get above.
The proper way would be to find how many full packets have valid SQN values
and set 'n' accordingly.
I know it is an extra pain for TSO mode, but I don't see any better way here.
>
> - sqc = rte_cpu_to_be_64(sqn + i);
> + sqc = rte_cpu_to_be_64(sqn);
> gen_iv(iv, sqc);
> + sqn += nb_segs[i];
>
> /* try to update the packet itself */
> - rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
> + rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
> + nb_segs[i] > 1);
I don't think we have to make the decision based on the number of segments.
Even if the whole packet fits into one TCP segment, TX_TCP_SEG is still set for it,
so the HW/PMD expects the data in a different format.
Probably it should be based on the flags value, something like:
mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG).
>
> k += (rc >= 0);
>
> @@ -703,8 +752,8 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> }
>
> /* copy not processed mbufs beyond good ones */
> - if (k != n && k != 0)
> - move_bad_mbufs(mb, dr, n, n - k);
> + if (k != num && k != 0)
> + move_bad_mbufs(mb, dr, num, num - k);
>
> inline_outb_mbuf_prepare(ss, mb, k);
> return k;
> @@ -719,29 +768,48 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> struct rte_mbuf *mb[], uint16_t num)
> {
> int32_t rc;
> - uint32_t i, k, n;
> + uint32_t i, k, n, nb_sqn;
> uint64_t sqn;
> rte_be64_t sqc;
> struct rte_ipsec_sa *sa;
> union sym_op_data icv;
> uint64_t iv[IPSEC_MAX_IV_QWORD];
> uint32_t dr[num];
> + uint16_t nb_segs[num];
>
> sa = ss->sa;
> + nb_sqn = 0;
> + /* Calculate number of sequence numbers required */
> + for (i = 0; i != num; i++) {
> + nb_segs[i] = esn_outb_nb_segments(mb[i]);
> + nb_sqn += nb_segs[i];
> + /* setup outer l2 and l3 len for TSO */
> + if (nb_segs[i] > 1) {
> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> + mb[i]->outer_l3_len =
> + sizeof(struct rte_ipv4_hdr);
> + else
> + mb[i]->outer_l3_len =
> + sizeof(struct rte_ipv6_hdr);
Again, that just doesn't look right to me.
> + mb[i]->outer_l2_len = mb[i]->l2_len;
For transport mode I am actually not sure how the mbuf tx_offload fields have to be set up...
Do we still need to set up the outer fields, considering that we are not adding a new IP header here?
> + }
> + }
>
> - n = num;
> + n = nb_sqn;
> sqn = esn_outb_update_sqn(sa, &n);
> - if (n != num)
> + if (n != nb_sqn)
> rte_errno = EOVERFLOW;
>
> k = 0;
> - for (i = 0; i != n; i++) {
> + for (i = 0; i != num; i++) {
Same story as for tunnel, we can't just ignore an error here.
>
> - sqc = rte_cpu_to_be_64(sqn + i);
> + sqc = rte_cpu_to_be_64(sqn);
> gen_iv(iv, sqc);
> + sqn += nb_segs[i];
>
> /* try to update the packet itself */
> - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
> + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
> + nb_segs[i] > 1);
Same thoughts as for tunnel mode.
>
> k += (rc >= 0);
>
> @@ -753,8 +821,8 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> }
>
> /* copy not processed mbufs beyond good ones */
> - if (k != n && k != 0)
> - move_bad_mbufs(mb, dr, n, n - k);
> + if (k != num && k != 0)
> + move_bad_mbufs(mb, dr, num, num - k);
>
> inline_outb_mbuf_prepare(ss, mb, k);
> return k;
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support
2021-10-14 14:42 ` Ananyev, Konstantin
@ 2021-10-18 16:32 ` Nicolau, Radu
2021-10-26 17:45 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Nicolau, Radu @ 2021-10-18 16:32 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
Hi, I reworked this patch as part of a new patchset, and some comments
below.
On the comment about the offload capabilities, i.e. what happens if a
PMD supports SECURITY and TSO but not both at the same time: if
this is the case, both should not be set, and this can
theoretically happen with any combination of offloads.
On 10/14/2021 3:42 PM, Ananyev, Konstantin wrote:
>
>>> Add support for transmit segmentation offload to inline crypto processing
>> mode. This offload is not supported by other offload modes, as at a
>> minimum it requires inline crypto for IPsec to be supported on the
>> network interface.
> Thanks for the rework.
> It looks much better to me now, but there are still a few more comments.
> Konstantin
>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> ---
>> + for (i = 0; i != num; i++) {
>> + nb_segs[i] = esn_outb_nb_segments(mb[i]);
>> + nb_sqn += nb_segs[i];
>> + /* setup outer l2 and l3 len for TSO */
>> + if (nb_segs[i] > 1) {
>> + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
>> + mb[i]->outer_l3_len =
>> + sizeof(struct rte_ipv4_hdr);
>> + else
>> + mb[i]->outer_l3_len =
>> + sizeof(struct rte_ipv6_hdr);
>> + mb[i]->outer_l2_len = mb[i]->l2_len;
> I still don't understand your logic behind setting these fields here.
> How it looks to me:
> It is a tunnel mode, so ipsec lib appends it's tunnel header.
> In the normal (non-TSO) case it sets up l2_len and l3_len, which are stored inside sa->tx_offload
> (in the non-TSO case we don't care about the inner/outer distinction and don't have to set up
> outer fields or set TX_PKT_OUTER flags).
> Now for TSO we do need to do that, right?
> So as I understand:
> sa->tx_offload.l2_len will become mb->outer_l2_len
> sa->tx_offload.l3_len will become mb->outer_l3_len
> mb->l2_len should be set to zero
> mb->l3_len, mb->l4_len, mb->tso_segsz should remain the same
> (ipsec lib shouldn't modify them).
> Please correct me, if I missed something here.
> Also note that right now we set up the mbuf tx_offload way below
> these lines - at outb_tun_pkt_prepare().
> So probably these changes have to be adjusted after that function call.
I removed this section; I think it's best to leave it to the upper layer to
set these fields anyway.
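Konstantin's field mapping above can be sketched as follows. Note this uses a
simplified stand-in struct instead of the real struct rte_mbuf so it stays
self-contained; the helper name and parameters are ours, purely illustrative,
and not part of the patch:

```c
#include <stdint.h>

/* Simplified stand-in for the rte_mbuf tx-offload fields discussed
 * above -- not the real struct rte_mbuf, just the relevant fields. */
struct mbuf_txo {
	uint64_t outer_l2_len;
	uint64_t outer_l3_len;
	uint64_t l2_len;
	uint64_t l3_len;
	uint64_t l4_len;
	uint64_t tso_segsz;
};

/* For TSO in tunnel mode the SA's header lengths (the tunnel headers
 * the ipsec lib prepends) become the outer lengths, the inner L2
 * length is zeroed, and the inner l3_len/l4_len/tso_segsz values set
 * by the application are left untouched. */
static void
tso_tun_txo_setup(struct mbuf_txo *mb, uint64_t sa_l2_len, uint64_t sa_l3_len)
{
	mb->outer_l2_len = sa_l2_len;
	mb->outer_l3_len = sa_l3_len;
	mb->l2_len = 0;
	/* l3_len, l4_len, tso_segsz: deliberately not modified */
}
```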
>
> }
>> + }
>>
>> - n = num;
>> + n = nb_sqn;
>> sqn = esn_outb_update_sqn(sa, &n);
>> - if (n != num)
>> + if (n != nb_sqn)
>> rte_errno = EOVERFLOW;
>>
>> k = 0;
>> - for (i = 0; i != n; i++) {
>> + for (i = 0; i != num; i++) {
> As I stated in the previous mail, you can't just assume that n == num always.
> That way you just ignore the SQN overflow error you get above.
> The proper way - would be to find for how many full packets you have
> valid SQN value and set 'n' to it.
> I know it is an extra pain for TSO mode, but I don't see any better way here.
I reworked this, I hope I got it right this time :)
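Konstantin's suggestion can be sketched as a small standalone helper
(hypothetical; this is not the code that went into the patch): given the
number of sequence numbers actually granted and the per-packet segment
counts, count how many whole packets have valid SQNs.

```c
#include <stdint.h>

/* With TSO each packet consumes one SQN per output segment, so after
 * esn_outb_update_sqn() grants only 'n' sequence numbers, walk the
 * per-packet segment counts and stop at the first packet that would
 * not fit entirely within the granted budget. */
static uint16_t
whole_pkts_for_sqn(const uint16_t nb_segs[], uint16_t num, uint32_t n)
{
	uint16_t i;
	uint32_t used = 0;

	for (i = 0; i != num; i++) {
		if (used + nb_segs[i] > n)
			break;
		used += nb_segs[i];
	}
	return i;
}
```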
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support
2021-10-18 16:32 ` Nicolau, Radu
@ 2021-10-26 17:45 ` Ananyev, Konstantin
2021-10-27 12:29 ` Nicolau, Radu
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-26 17:45 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
>
> Hi, I reworked this patch as part of a new patchset, and some comments
> below.
>
> On the comment about the offload capabilities, i.e. what happens if a
> PMD supports SECURITY and TSO but not both at the same time, well, if
> that is the case they should not both be set,
Ok, let's take an existing PMD: ixgbe.
It supports both IPSEC and TSO offloads.
Would IPSEC+TSO work with current ixgbe PMD?
> and also this can
> theoretically happen with any offloads combinations.
Hmm... can you provide an example with any DPDK PMD when offloads
are supported in separate, but their combination doesn't?
* Re: [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support
2021-10-26 17:45 ` Ananyev, Konstantin
@ 2021-10-27 12:29 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-10-27 12:29 UTC (permalink / raw)
To: Ananyev, Konstantin, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
On 10/26/2021 6:45 PM, Ananyev, Konstantin wrote:
>
>> Hi, I reworked this patch as part of a new patchset, and some comments
>> below.
>>
>> On the comment about the offload capabilities, i.e. what happens if a
>> PMD supports SECURITY and TSO but not both at the same time, well, if
>> that is the case they should not both be set,
> Ok, let's take an existing PMD: ixgbe.
> It supports both IPSEC and TSO offloads.
> Would IPSEC+TSO work with current ixgbe PMD?
Yes, it should work.
>
>> and also this can
>> theoretically happen with any offloads combinations.
> Hmm... can you provide an example with any DPDK PMD when offloads
> are supported in separate, but their combination doesn't?
No, I don't know any, but my point was that if we have a case where a
PMD supports TSO and IPsec but not both at the same time, then it will
be the same as with any other 2 offloads that aren't supported together.
* [dpdk-dev] [PATCH v9 07/10] ipsec: add support for SA telemetry
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 08/10] ipsec: add support for initial SQN value Radu Nicolau
` (2 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for IPsec SAs.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 7 +
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_inb.c | 18 +-
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/ipsec_telemetry.c | 244 +++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 +++
lib/ipsec/sa.c | 10 +-
lib/ipsec/sa.h | 9 +
lib/ipsec/version.map | 9 +
10 files changed, 328 insertions(+), 11 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index fc0af5eadb..2a262f8c51 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -321,6 +321,13 @@ Supported features
AES_GMAC, HMAC-SHA1, NULL.
+Telemetry support
+------------------
+Telemetry support exposes SA configuration details and IPsec packet data
+counter statistics. Per SA telemetry statistics can be enabled using
+``rte_ipsec_telemetry_sa_add`` and disabled using
+``rte_ipsec_telemetry_sa_del``. Note that these calls are not thread safe.
+
Limitations
-----------
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 0a9c71d92e..70932fc8a9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -159,6 +159,7 @@ New Features
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
* Added support for NAT-T / UDP encapsulated ESP
* Added TSO offload support; only supported for inline crypto mode.
+ * Added support for SA telemetry.
Removed Items
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..6fbe468a61 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -15,7 +15,7 @@
#include "misc.h"
#include "pad.h"
-typedef uint16_t (*esp_inb_process_t)(const struct rte_ipsec_sa *sa,
+typedef uint16_t (*esp_inb_process_t)(struct rte_ipsec_sa *sa,
struct rte_mbuf *mb[], uint32_t sqn[], uint32_t dr[], uint16_t num,
uint8_t sqh_len);
@@ -573,10 +573,10 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
* *process* function for tunnel packets
*/
static inline uint16_t
-tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+tun_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
- uint32_t adj, i, k, tl;
+ uint32_t adj, i, k, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -598,6 +598,7 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
adj = hl[i] + cofs;
@@ -621,10 +622,13 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
tun_process_step3(mb[i], sa->tx_offload.msk,
sa->tx_offload.val);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
@@ -632,11 +636,11 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
* *process* function for tunnel packets
*/
static inline uint16_t
-trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+trs_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
char *np;
- uint32_t i, k, l2, tl;
+ uint32_t i, k, l2, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -656,6 +660,7 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
tl = tlen + espt[i].pad_len;
@@ -674,10 +679,13 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* update mbuf's metadata */
trs_process_step3(mb[i]);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index d327c32a38..812ba1e5ec 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -623,7 +623,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -632,6 +632,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
@@ -642,10 +643,13 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
icv = rte_pktmbuf_mtod_offset(ml, void *,
ml->data_len - icv_len);
remove_sqh(icv, icv_len);
+ bytes += mb[i]->pkt_len;
k++;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
/* handle unprocessed mbufs */
if (k != num) {
@@ -665,16 +669,20 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+ bytes = 0;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->pkt_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes;
}
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
new file mode 100644
index 0000000000..713da75f38
--- /dev/null
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include <rte_telemetry.h>
+#include <rte_malloc.h>
+#include "sa.h"
+
+
+struct ipsec_telemetry_entry {
+ LIST_ENTRY(ipsec_telemetry_entry) next;
+ const struct rte_ipsec_sa *sa;
+};
+static LIST_HEAD(ipsec_telemetry_head, ipsec_telemetry_entry)
+ ipsec_telemetry_list = LIST_HEAD_INITIALIZER();
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ const struct rte_ipsec_sa *sa = entry->sa;
+ rte_tel_data_add_array_u64(data, rte_be_to_cpu_32(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ const struct rte_ipsec_sa *sa;
+ uint32_t sa_spi = 0;
+
+ if (params) {
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 0));
+ if (sa_spi == 0)
+ return -EINVAL;
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ char sa_name[64];
+ sa = entry->sa;
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (sa_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes -
+ (sa->statistics.count * sa->hdr_len));
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i",
+ rte_be_to_cpu_32(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ const struct rte_ipsec_sa *sa;
+ uint32_t sa_spi = 0;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 0));
+ /* valid SPI needed */
+ if (sa_spi == 0)
+ return -EINVAL;
+
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ uint64_t mode;
+ sa = entry->sa;
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+ }
+
+ return 0;
+}
+
+
+int
+rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry = rte_zmalloc(NULL,
+ sizeof(struct ipsec_telemetry_entry), 0);
+ if (entry == NULL)
+ return -ENOMEM;
+ entry->sa = sa;
+ LIST_INSERT_HEAD(&ipsec_telemetry_list, entry, next);
+ return 0;
+}
+
+void
+rte_ipsec_telemetry_sa_del(const struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry;
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ if (sa == entry->sa) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ return;
+ }
+ }
+}
+
+
+RTE_INIT(rte_ipsec_telemetry_init)
+{
+ rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec SAs with telemetry enabled.");
+ rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec SA statistics. Parameters: int sa_spi");
+ rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_details,
+ "Returns IPsec SA configuration. Parameters: int sa_spi");
+}
+
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..ddb9ea1767 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -1,9 +1,11 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
+sources = files('esp_inb.c', 'esp_outb.c',
+ 'sa.c', 'ses.c', 'ipsec_sad.c',
+ 'ipsec_telemetry.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..5308f250a7 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * Note that this function is not thread safe
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa);
+
+/**
+ * Disable per SA telemetry for a specific SA.
+ * Note that this function is not thread safe
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry disabled.
+ */
+__rte_experimental
+void
+rte_ipsec_telemetry_sa_del(const struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 2830506385..d767b2036a 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -656,19 +656,25 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->pkt_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes;
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..6e59f18e16 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -132,6 +132,15 @@ struct rte_ipsec_sa {
struct replay_sqn *rsn[REPLAY_SQN_NUM];
} inb;
} sqn;
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
} __rte_cache_aligned;
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..0af27ffd60 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_sa_add;
+ rte_ipsec_telemetry_sa_del;
+
+};
--
2.25.1
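The byte counter reported by the stats handler in this patch subtracts the
prepended header once per packet; a standalone sketch of that adjustment
(plain C, no DPDK dependencies; the function name is ours, not the
library's):

```c
#include <stdint.h>

/* Sketch of the adjustment handle_telemetry_cmd_ipsec_sa_stats() makes:
 * sa->statistics.bytes accumulates whole packet lengths, so the handler
 * reports bytes net of the tunnel header (hdr_len) added per packet. */
static uint64_t
sa_payload_bytes(uint64_t bytes, uint64_t count, uint32_t hdr_len)
{
	return bytes - count * (uint64_t)hdr_len;
}
```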
* [dpdk-dev] [PATCH v9 08/10] ipsec: add support for initial SQN value
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 07/10] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-14 12:08 ` Ananyev, Konstantin
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 09/10] doc: remove unneeded ipsec new field deprecation Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 10/10] doc: remove unneeded security deprecation Radu Nicolau
9 siblings, 1 reply; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/sa.c | 18 +++++++++++-------
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 70932fc8a9..4ae9fe54d5 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -160,6 +160,7 @@ New Features
* Added support for NAT-T / UDP encapsulated ESP
* Added TSO offload support; only supported for inline crypto mode.
* Added support for SA telemetry.
+ * Added support for setting a non-default starting ESN value.
Removed Items
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index d767b2036a..d32f58dd1b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn > 1 ? sqn : 1;
algo_type = sa->algo_type;
@@ -380,7 +380,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
return 0;
}
@@ -509,7 +509,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, prm->ipsec_xform.esn.value);
break;
}
@@ -520,15 +520,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
+ uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -604,7 +608,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
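The initial-SQN rule this patch applies to outbound SAs can be stated as a
one-liner matching `sa->sqn.outb = sqn > 1 ? sqn : 1;` above (illustrative
only; the function name is ours):

```c
#include <stdint.h>

/* Outbound SQN seeding as done by esp_outb_init() in the patch above:
 * start from the configured value, but never below 1. */
static uint64_t
esp_outb_initial_sqn(uint64_t sqn)
{
	return sqn > 1 ? sqn : 1;
}
```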
* Re: [dpdk-dev] [PATCH v9 08/10] ipsec: add support for initial SQN value
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 08/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-10-14 12:08 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2021-10-14 12:08 UTC (permalink / raw)
To: Nicolau, Radu, Iremonger, Bernard, Medvedkin, Vladimir
Cc: dev, mdr, Richardson, Bruce, Zhang, Roy Fan, hemant.agrawal,
gakhil, anoobj, Doherty, Declan, Sinha, Abhijit, Buckley,
Daniel M, marchana, ktejasree, matan
> Update IPsec library to support initial SQN value.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 1 +
> lib/ipsec/sa.c | 18 +++++++++++-------
> 2 files changed, 12 insertions(+), 7 deletions(-)
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> --
> 2.25.1
* [dpdk-dev] [PATCH v9 09/10] doc: remove unneeded ipsec new field deprecation
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 08/10] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 10/10] doc: remove unneeded security deprecation Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
The deprecation notice regarding extending rte_ipsec_sa_prm with a
new field hdr_l3_len is no longer applicable.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
1 file changed, 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6517e7821f..d1bcd75327 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -211,9 +211,6 @@ Deprecation Notices
will be updated with new fields to support new features like TSO in case of
protocol offload.
-* ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
- ``hdr_l3_len`` to configure tunnel L3 header length.
-
* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
to make the driver interface as internal and the structures ``rte_eventdev_data``,
``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
--
2.25.1
* [dpdk-dev] [PATCH v9 10/10] doc: remove unneeded security deprecation
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 09/10] doc: remove unneeded ipsec new field deprecation Radu Nicolau
@ 2021-10-13 12:13 ` Radu Nicolau
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-13 12:13 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Following feedback, the new fields regarding TSO support were not
implemented; it was decided to implement TSO support by using existing
mbuf fields.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 7 -------
1 file changed, 7 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d1bcd75327..53ce7466c0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -204,13 +204,6 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with:
- new field: IPsec payload MSS (Maximum Segment Size).
-
-* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
- will be updated with new fields to support new features like TSO in case of
- protocol offload.
-
* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
to make the driver interface as internal and the structures ``rte_eventdev_data``,
``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
--
2.25.1
* [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries
2021-07-13 13:35 [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
` (17 preceding siblings ...)
2021-10-13 12:13 ` [dpdk-dev] [PATCH v9 00/10] new features for ipsec and security libraries Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 1/9] security: add ESN field to ipsec_xform Radu Nicolau
` (9 more replies)
18 siblings, 10 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add support for:
NAT-T/UDP encapsulation
AES_CCM, CHACHA20_POLY1305 and AES_GMAC
SA telemetry
ESN with initial SQN value
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Radu Nicolau (9):
security: add ESN field to ipsec_xform
ipsec: add support for AEAD algorithms
security: add UDP params for IPsec NAT-T
ipsec: add support for NAT-T
mbuf: add IPsec ESP tunnel type
ipsec: add support for SA telemetry
ipsec: add support for initial SQN value
doc: remove unneeded ipsec new field deprecation
doc: remove unneeded security deprecation
doc/guides/prog_guide/ipsec_lib.rst | 12 +-
doc/guides/rel_notes/deprecation.rst | 11 --
doc/guides/rel_notes/release_21_11.rst | 16 ++
lib/ipsec/crypto.h | 137 ++++++++++++++
lib/ipsec/esp_inb.c | 84 ++++++++-
lib/ipsec/esp_outb.c | 91 ++++++++-
lib/ipsec/ipsec_telemetry.c | 244 +++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 +++
lib/ipsec/rte_ipsec_sa.h | 9 +-
lib/ipsec/sa.c | 120 ++++++++++--
lib/ipsec/sa.h | 15 ++
lib/ipsec/version.map | 9 +
lib/mbuf/rte_mbuf_core.h | 1 +
lib/security/rte_security.h | 15 ++
15 files changed, 745 insertions(+), 48 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
--
v2: fixed lib/ipsec/version.map updates to show correct version
v3: fixed build error and corrected misspelled email address
v4: add doxygen comments for the IPsec telemetry APIs
update inline comments referring to the wrong RFC
v5: update commit messages after feedback
update the UDP encapsulation patch to actually use the configured ports
v6: fix initial SQN value
v7: reworked the patches after feedback
v8: updated library doc, release notes and removed deprecation notices
v9: reworked telemetry, tso and esn patches
v10: removed TSO patch, addressed feedback
2.25.1
* [dpdk-dev] [PATCH v10 1/9] security: add ESN field to ipsec_xform
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 2/9] ipsec: add support for AEAD algorithms Radu Nicolau
` (8 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 2 +-
doc/guides/rel_notes/release_21_11.rst | 5 +++++
lib/security/rte_security.h | 8 ++++++++
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 45239ca56e..adec0a5677 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -201,7 +201,7 @@ Deprecation Notices
* security: The structure ``rte_security_ipsec_xform`` will be extended with
multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
+ IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like TSO in case of
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..8bc51a048c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -273,6 +273,11 @@ ABI Changes
packet IPv4 header checksum and L4 checksum need to be offloaded to
security device.
+* security: A new structure ``esn`` was added in structure
+ ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
+ application to start from an arbitrary ESN value for debug and SA lifetime
+ enforcement purposes.
+
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..764ce83bca 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -318,6 +318,14 @@ struct rte_security_ipsec_xform {
/**< Anti replay window size to enable sequence replay attack handling.
* replay checking is disabled if the window size is 0.
*/
+ union {
+ uint64_t value;
+ struct {
+ uint32_t low;
+ uint32_t hi;
+ };
+ } esn;
+ /**< Extended Sequence Number */
};
/**
--
2.25.1
* [dpdk-dev] [PATCH v10 2/9] ipsec: add support for AEAD algorithms
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 1/9] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 3/9] security: add UDP params for IPsec NAT-T Radu Nicolau
` (7 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 3 +-
doc/guides/rel_notes/release_21_11.rst | 4 +
lib/ipsec/crypto.h | 137 +++++++++++++++++++++++++
lib/ipsec/esp_inb.c | 66 +++++++++++-
lib/ipsec/esp_outb.c | 70 ++++++++++++-
lib/ipsec/sa.c | 54 +++++++++-
lib/ipsec/sa.h | 6 ++
7 files changed, 328 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 9f2b26072d..93e213bf36 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -313,7 +313,8 @@ Supported features
* ESN and replay window.
-* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, HMAC-SHA1, NULL.
+* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
+ AES_GMAC, HMAC-SHA1, NULL.
Limitations
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8bc51a048c..ef078e756a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -159,6 +159,10 @@ New Features
* Added tests to verify tunnel header verification in IPsec inbound.
* Added tests to verify inner checksum.
+* **IPsec library new features.**
+
+ * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
+
Removed Items
-------------
diff --git a/lib/ipsec/crypto.h b/lib/ipsec/crypto.h
index 3d03034590..93d20aaaa0 100644
--- a/lib/ipsec/crypto.h
+++ b/lib/ipsec/crypto.h
@@ -21,6 +21,37 @@ struct aesctr_cnt_blk {
uint32_t cnt;
} __rte_packed;
+ /*
+ * CHACHA20-POLY1305 devices have some specific requirements
+ * for IV and AAD formats.
+ * Ideally that would be done by the driver itself.
+ */
+
+struct aead_chacha20_poly1305_iv {
+ uint32_t salt;
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_chacha20_poly1305_aad {
+ uint32_t spi;
+ /*
+ * RFC 7634, section 2.1:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct chacha20_poly1305_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
/*
* AES-GCM devices have some specific requirements for IV and AAD formats.
* Ideally that to be done by the driver itself.
@@ -51,6 +82,47 @@ struct gcm_esph_iv {
uint64_t iv;
} __rte_packed;
+ /*
+ * AES-CCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that would be done by the driver itself.
+ */
+union aead_ccm_salt {
+ uint32_t salt;
+ struct inner {
+ uint8_t salt8[3];
+ uint8_t ccm_flags;
+ } inner;
+} __rte_packed;
+
+
+struct aead_ccm_iv {
+ uint8_t ccm_flags;
+ uint8_t salt[3];
+ uint64_t iv;
+ uint32_t cnt;
+} __rte_packed;
+
+struct aead_ccm_aad {
+ uint8_t padding[18];
+ uint32_t spi;
+ /*
+ * RFC 4309, section 5:
+ * Two formats of the AAD are defined:
+ * one for 32-bit sequence numbers, and one for 64-bit ESN.
+ */
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } sqn;
+ uint32_t align0; /* align to 16B boundary */
+} __rte_packed;
+
+struct ccm_esph_iv {
+ struct rte_esp_hdr esph;
+ uint64_t iv;
+} __rte_packed;
+
+
static inline void
aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
{
@@ -59,6 +131,16 @@ aes_ctr_cnt_blk_fill(struct aesctr_cnt_blk *ctr, uint64_t iv, uint32_t nonce)
ctr->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_chacha20_poly1305_iv_fill(struct aead_chacha20_poly1305_iv
+ *chacha20_poly1305,
+ uint64_t iv, uint32_t salt)
+{
+ chacha20_poly1305->salt = salt;
+ chacha20_poly1305->iv = iv;
+ chacha20_poly1305->cnt = rte_cpu_to_be_32(1);
+}
+
static inline void
aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
{
@@ -67,6 +149,21 @@ aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
gcm->cnt = rte_cpu_to_be_32(1);
}
+static inline void
+aead_ccm_iv_fill(struct aead_ccm_iv *ccm, uint64_t iv, uint32_t salt)
+{
+ union aead_ccm_salt tsalt;
+
+ tsalt.salt = salt;
+ ccm->ccm_flags = tsalt.inner.ccm_flags;
+ ccm->salt[0] = tsalt.inner.salt8[0];
+ ccm->salt[1] = tsalt.inner.salt8[1];
+ ccm->salt[2] = tsalt.inner.salt8[2];
+ ccm->iv = iv;
+ ccm->cnt = rte_cpu_to_be_32(1);
+}
+
+
/*
* RFC 4106, 5 AAD Construction
* spi and sqn should already be converted into network byte order.
@@ -86,6 +183,25 @@ aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
aad->align0 = 0;
}
+/*
+ * RFC 4309, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_ccm_aad_fill(struct aead_ccm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
static inline void
gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
{
@@ -93,6 +209,27 @@ gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
iv[1] = 0;
}
+
+/*
+ * RFC 7634, 2.1 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_chacha20_poly1305_aad_fill(struct aead_chacha20_poly1305_aad *aad,
+ rte_be32_t spi, rte_be64_t sqn,
+ int esn)
+{
+ aad->spi = spi;
+ if (esn)
+ aad->sqn.u64 = sqn;
+ else {
+ aad->sqn.u32[0] = sqn_low32(sqn);
+ aad->sqn.u32[1] = 0;
+ }
+ aad->align0 = 0;
+}
+
/*
* Helper routine to copy IV
* Right now we support only algorithms with IV length equals 0/8/16 bytes.
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..d66c88f05d 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -63,6 +63,8 @@ inb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivc, *ivp;
uint32_t algo;
@@ -83,6 +85,24 @@ inb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ sop_aead_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -91,6 +111,14 @@ inb_cop_prepare(struct rte_crypto_op *cop,
ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
copy_iv(ivc, ivp, sa->iv_len);
break;
+ case ALGO_TYPE_AES_GMAC:
+ sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
sop_ciph_auth_prepare(sop, sa, icv, pofs, plen);
@@ -110,6 +138,8 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
uint32_t *pofs, uint32_t plen, void *iv)
{
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint64_t *ivp;
uint32_t clen;
@@ -120,9 +150,19 @@ inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_GMAC:
gcm = (struct aead_gcm_iv *)iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = (struct aead_ccm_iv *)iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = (struct aead_chacha20_poly1305_iv *)iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CBC:
case ALGO_TYPE_3DES_CBC:
copy_iv(iv, ivp, sa->iv_len);
@@ -175,6 +215,8 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
struct aead_gcm_aad *aad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0)
@@ -184,9 +226,27 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM.
*/
- if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
+ if (sa->aad_len != 0) {
+ aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
}
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..a3f77469c3 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -63,6 +63,8 @@ outb_cop_prepare(struct rte_crypto_op *cop,
{
struct rte_crypto_sym_op *sop;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t algo;
@@ -80,6 +82,15 @@ outb_cop_prepare(struct rte_crypto_op *cop,
/* NULL case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
break;
+ case ALGO_TYPE_AES_GMAC:
+ /* GMAC case */
+ sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+ sa->iv_ofs);
+ aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_GCM:
/* AEAD (AES_GCM) case */
sop_aead_prepare(sop, sa, icv, hlen, plen);
@@ -89,6 +100,26 @@ outb_cop_prepare(struct rte_crypto_op *cop,
sa->iv_ofs);
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ /* AEAD (AES_CCM) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ ccm = rte_crypto_op_ctod_offset(cop, struct aead_ccm_iv *,
+ sa->iv_ofs);
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ /* AEAD (CHACHA20_POLY) case */
+ sop_aead_prepare(sop, sa, icv, hlen, plen);
+
+ /* fill AAD IV (located inside crypto op) */
+ chacha20_poly1305 = rte_crypto_op_ctod_offset(cop,
+ struct aead_chacha20_poly1305_iv *,
+ sa->iv_ofs);
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
/* Cipher-Auth (AES-CTR *) case */
sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
@@ -196,7 +227,9 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
const union sym_op_data *icv)
{
uint32_t *psqh;
- struct aead_gcm_aad *aad;
+ struct aead_gcm_aad *gaad;
+ struct aead_ccm_aad *caad;
+ struct aead_chacha20_poly1305_aad *chacha20_poly1305_aad;
/* insert SQN.hi between ESP trailer and ICV */
if (sa->sqh_len != 0) {
@@ -208,9 +241,29 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
* fill IV and AAD fields, if any (aad fields are placed after icv),
* right now we support only one AEAD algorithm: AES-GCM .
*/
+ switch (sa->algo_type) {
+ case ALGO_TYPE_AES_GCM:
if (sa->aad_len != 0) {
- aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
- aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+ gaad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+ aead_gcm_aad_fill(gaad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_AES_CCM:
+ if (sa->aad_len != 0) {
+ caad = (struct aead_ccm_aad *)(icv->va + sa->icv_len);
+ aead_ccm_aad_fill(caad, sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ if (sa->aad_len != 0) {
+ chacha20_poly1305_aad = (struct aead_chacha20_poly1305_aad *)
+ (icv->va + sa->icv_len);
+ aead_chacha20_poly1305_aad_fill(chacha20_poly1305_aad,
+ sa->spi, sqc, IS_ESN(sa));
+ }
+ break;
+ default:
+ break;
}
}
@@ -418,6 +471,8 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
{
uint64_t *ivp = iv;
struct aead_gcm_iv *gcm;
+ struct aead_ccm_iv *ccm;
+ struct aead_chacha20_poly1305_iv *chacha20_poly1305;
struct aesctr_cnt_blk *ctr;
uint32_t clen;
@@ -426,6 +481,15 @@ outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
gcm = iv;
aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
break;
+ case ALGO_TYPE_AES_CCM:
+ ccm = iv;
+ aead_ccm_iv_fill(ccm, ivp[0], sa->salt);
+ break;
+ case ALGO_TYPE_CHACHA20_POLY1305:
+ chacha20_poly1305 = iv;
+ aead_chacha20_poly1305_iv_fill(chacha20_poly1305,
+ ivp[0], sa->salt);
+ break;
case ALGO_TYPE_AES_CTR:
ctr = iv;
aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..720e0f365b 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -47,6 +47,15 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
if (xfn != NULL)
return -EINVAL;
xform->aead = &xf->aead;
+
+ /* GMAC has only auth */
+ } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+ xf->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xfn != NULL)
+ return -EINVAL;
+ xform->auth = &xf->auth;
+ xform->cipher = &xfn->cipher;
+
/*
* CIPHER+AUTH xforms are expected in strict order,
* depending on SA direction:
@@ -247,12 +256,13 @@ esp_inb_init(struct rte_ipsec_sa *sa)
sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -294,6 +304,8 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
case ALGO_TYPE_AES_CTR:
case ALGO_TYPE_NULL:
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr) +
@@ -305,15 +317,20 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
sa->ctp.cipher.length = sa->iv_len;
break;
+ case ALGO_TYPE_AES_GMAC:
+ sa->ctp.cipher.offset = 0;
+ sa->ctp.cipher.length = 0;
+ break;
}
/*
- * for AEAD and NULL algorithms we can assume that
+ * for AEAD algorithms we can assume that
* auth and cipher offsets would be equal.
*/
switch (algo_type) {
case ALGO_TYPE_AES_GCM:
- case ALGO_TYPE_NULL:
+ case ALGO_TYPE_AES_CCM:
+ case ALGO_TYPE_CHACHA20_POLY1305:
sa->ctp.auth.raw = sa->ctp.cipher.raw;
break;
default:
@@ -374,13 +391,39 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
sa->pad_align = IPSEC_PAD_AES_GCM;
sa->algo_type = ALGO_TYPE_AES_GCM;
break;
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ /* RFC 4309 */
+ sa->aad_len = sizeof(struct aead_ccm_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_AES_CCM;
+ sa->algo_type = ALGO_TYPE_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
+ /* RFC 7634 & 8439 */
+ sa->aad_len = sizeof(struct aead_chacha20_poly1305_aad);
+ sa->icv_len = cxf->aead->digest_length;
+ sa->iv_ofs = cxf->aead->iv.offset;
+ sa->iv_len = sizeof(uint64_t);
+ sa->pad_align = IPSEC_PAD_CHACHA20_POLY1305;
+ sa->algo_type = ALGO_TYPE_CHACHA20_POLY1305;
+ break;
default:
return -EINVAL;
}
+ } else if (cxf->auth->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ /* RFC 4543 */
+ /* AES-GMAC is a special case of auth that needs IV */
+ sa->pad_align = IPSEC_PAD_AES_GMAC;
+ sa->iv_len = sizeof(uint64_t);
+ sa->icv_len = cxf->auth->digest_length;
+ sa->iv_ofs = cxf->auth->iv.offset;
+ sa->algo_type = ALGO_TYPE_AES_GMAC;
+
} else {
sa->icv_len = cxf->auth->digest_length;
sa->iv_ofs = cxf->cipher->iv.offset;
- sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
switch (cxf->cipher->algo) {
case RTE_CRYPTO_CIPHER_NULL:
@@ -414,6 +457,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
}
}
+ sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
sa->udata = prm->userdata;
sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
sa->salt = prm->ipsec_xform.salt;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 1bffe751f5..107ebd1519 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -19,7 +19,10 @@ enum {
IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
IPSEC_PAD_AES_CTR = IPSEC_PAD_DEFAULT,
IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_CCM = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_CHACHA20_POLY1305 = IPSEC_PAD_DEFAULT,
IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+ IPSEC_PAD_AES_GMAC = IPSEC_PAD_DEFAULT,
};
/* iv sizes for different algorithms */
@@ -67,6 +70,9 @@ enum sa_algo_type {
ALGO_TYPE_AES_CBC,
ALGO_TYPE_AES_CTR,
ALGO_TYPE_AES_GCM,
+ ALGO_TYPE_AES_CCM,
+ ALGO_TYPE_CHACHA20_POLY1305,
+ ALGO_TYPE_AES_GMAC,
ALGO_TYPE_MAX
};
--
2.25.1
* [dpdk-dev] [PATCH v10 3/9] security: add UDP params for IPsec NAT-T
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 1/9] security: add ESN field to ipsec_xform Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 2/9] ipsec: add support for AEAD algorithms Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 4/9] ipsec: add support for NAT-T Radu Nicolau
` (6 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Ray Kinsella, Akhil Goyal, Declan Doherty
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau
Add support for specifying UDP port params for UDP encapsulation option.
RFC 3948 section 2.1 does not mandate the use of specific UDP ports for
the UDP-Encapsulated ESP header, so the ports are made configurable.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 5 ++---
doc/guides/rel_notes/release_21_11.rst | 4 ++++
lib/security/rte_security.h | 7 +++++++
3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index adec0a5677..a744fdb2c6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -199,9 +199,8 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with
- multiple fields: source and destination port of UDP encapsulation,
- IPsec payload MSS (Maximum Segment Size).
+* security: The structure ``rte_security_ipsec_xform`` will be extended with:
+ new field: IPsec payload MSS (Maximum Segment Size).
* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
will be updated with new fields to support new features like TSO in case of
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ef078e756a..ed56c16d4b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -282,6 +282,10 @@ ABI Changes
application to start from an arbitrary ESN value for debug and SA lifetime
enforcement purposes.
+* security: A new structure ``udp`` was added in structure
+ ``rte_security_ipsec_xform`` to allow setting the source and destination ports
+ for UDP encapsulated IPsec traffic.
+
* bbdev: Added capability related to more comprehensive CRC options,
shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 764ce83bca..17d0e95412 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
};
};
+struct rte_security_ipsec_udp_param {
+ uint16_t sport;
+ uint16_t dport;
+};
+
/**
* IPsec Security Association option flags
*/
@@ -326,6 +331,8 @@ struct rte_security_ipsec_xform {
};
} esn;
/**< Extended Sequence Number */
+ struct rte_security_ipsec_udp_param udp;
+ /**< UDP parameters, ignored when udp_encap option not specified */
};
/**
--
2.25.1
* [dpdk-dev] [PATCH v10 4/9] ipsec: add support for NAT-T
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (2 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 3/9] security: add UDP params for IPsec NAT-T Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 5/9] mbuf: add IPsec ESP tunnel type Radu Nicolau
` (5 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 2 ++
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_outb.c | 9 ++++++
lib/ipsec/rte_ipsec_sa.h | 9 +++++-
lib/ipsec/sa.c | 38 ++++++++++++++++++++++----
5 files changed, 52 insertions(+), 7 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 93e213bf36..af51ff8131 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -313,6 +313,8 @@ Supported features
* ESN and replay window.
+* NAT-T / UDP encapsulated ESP.
+
* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
AES_GMAC, HMAC-SHA1, NULL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ed56c16d4b..9b6591340a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -162,6 +162,7 @@ New Features
* **IPsec library new features.**
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
+ * Added support for NAT-T / UDP encapsulated ESP
Removed Items
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index a3f77469c3..0e3314b358 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -185,6 +186,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* copy tunnel pkt header */
rte_memcpy(ph, sa->hdr, sa->hdr_len);
+ /* if UDP encap is enabled update the dgram_len */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ (ph - sizeof(struct rte_udp_hdr));
+ udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
+ sa->hdr_l3_off - sa->hdr_len);
+ }
+
/* update original and new ip header fields */
update_tun_outb_l3hdr(sa, ph + sa->hdr_l3_off, ph + hlen,
mb->pkt_len - sqh_len, sa->hdr_l3_off, sqn_low16(sqc));
diff --git a/lib/ipsec/rte_ipsec_sa.h b/lib/ipsec/rte_ipsec_sa.h
index cf51ad8338..3a22705055 100644
--- a/lib/ipsec/rte_ipsec_sa.h
+++ b/lib/ipsec/rte_ipsec_sa.h
@@ -78,6 +78,7 @@ struct rte_ipsec_sa_prm {
* - for TUNNEL outer IP version (IPv4/IPv6)
* - are SA SQN operations 'atomic'
* - ESN enabled/disabled
+ * - NAT-T UDP encapsulated (TUNNEL mode only)
* ...
*/
@@ -89,7 +90,8 @@ enum {
RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
RTE_SATP_LOG2_ESN,
RTE_SATP_LOG2_ECN,
- RTE_SATP_LOG2_DSCP
+ RTE_SATP_LOG2_DSCP,
+ RTE_SATP_LOG2_NATT
};
#define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG2_IPV)
@@ -125,6 +127,11 @@ enum {
#define RTE_IPSEC_SATP_DSCP_DISABLE (0ULL << RTE_SATP_LOG2_DSCP)
#define RTE_IPSEC_SATP_DSCP_ENABLE (1ULL << RTE_SATP_LOG2_DSCP)
+#define RTE_IPSEC_SATP_NATT_MASK (1ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_DISABLE (0ULL << RTE_SATP_LOG2_NATT)
+#define RTE_IPSEC_SATP_NATT_ENABLE (1ULL << RTE_SATP_LOG2_NATT)
+
+
/**
* get type of given SA
* @return
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 720e0f365b..fa5a76cde1 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -5,6 +5,7 @@
#include <rte_ipsec.h>
#include <rte_esp.h>
#include <rte_ip.h>
+#include <rte_udp.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>
@@ -217,6 +218,10 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
} else
return -EINVAL;
+ /* check for UDP encapsulation flag */
+ if (prm->ipsec_xform.options.udp_encap == 1)
+ tp |= RTE_IPSEC_SATP_NATT_ENABLE;
+
/* check for ESN flag */
if (prm->ipsec_xform.options.esn == 0)
tp |= RTE_IPSEC_SATP_ESN_DISABLE;
@@ -355,12 +360,22 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->hdr_len = prm->tun.hdr_len;
sa->hdr_l3_off = prm->tun.hdr_l3_off;
+ memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
+
+ /* insert UDP header if UDP encapsulation is enabled */
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) {
+ struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
+ &sa->hdr[prm->tun.hdr_len];
+ sa->hdr_len += sizeof(struct rte_udp_hdr);
+ udph->src_port = prm->ipsec_xform.udp.sport;
+ udph->dst_port = prm->ipsec_xform.udp.dport;
+ udph->dgram_cksum = 0;
+ }
+
/* update l2_len and l3_len fields for outbound mbuf */
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
-
esp_outb_init(sa, sa->hdr_len);
}
@@ -372,7 +387,8 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
const struct crypto_xform *cxf)
{
static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
- RTE_IPSEC_SATP_MODE_MASK;
+ RTE_IPSEC_SATP_MODE_MASK |
+ RTE_IPSEC_SATP_NATT_MASK;
if (prm->ipsec_xform.options.ecn)
sa->tos_mask |= RTE_IPV4_HDR_ECN_MASK;
@@ -475,10 +491,16 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
esp_inb_init(sa);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6 |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
esp_outb_tun_init(sa, prm);
break;
+ case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
+ RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
esp_outb_init(sa, 0);
break;
@@ -551,9 +573,13 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
return -EINVAL;
- if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
- prm->tun.hdr_len > sizeof(sa->hdr))
- return -EINVAL;
+ if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ uint32_t hlen = prm->tun.hdr_len;
+ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE)
+ hlen += sizeof(struct rte_udp_hdr);
+ if (hlen > sizeof(sa->hdr))
+ return -EINVAL;
+ }
rc = fill_crypto_xform(&cxf, type, prm);
if (rc != 0)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v10 5/9] mbuf: add IPsec ESP tunnel type
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (3 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 4/9] ipsec: add support for NAT-T Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 6/9] ipsec: add support for SA telemetry Radu Nicolau
` (4 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil, anoobj,
declan.doherty, abhijit.sinha, daniel.m.buckley, marchana,
ktejasree, matan, Radu Nicolau
Add ESP tunnel type to the tunnel types list that can be specified
for TSO or checksum on the inner part of tunnel packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/mbuf/rte_mbuf_core.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index d6f1679944..fdaaaf67f2 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -248,6 +248,7 @@ extern "C" {
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
#define PKT_TX_TUNNEL_GTP (0x7ULL << 45)
+#define PKT_TX_TUNNEL_ESP (0x8ULL << 45)
/**
* Generic IP encapsulated tunnel type, used for TSO and checksum offload.
* It can be used for tunnels which are not standards or listed above.
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v10 6/9] ipsec: add support for SA telemetry
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (4 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 5/9] mbuf: add IPsec ESP tunnel type Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 7/9] ipsec: add support for initial SQN value Radu Nicolau
` (3 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin, Ray Kinsella
Cc: dev, bruce.richardson, roy.fan.zhang, hemant.agrawal, gakhil,
anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Add telemetry support for IPsec SAs.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 7 +
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_inb.c | 18 +-
lib/ipsec/esp_outb.c | 12 +-
lib/ipsec/ipsec_telemetry.c | 244 +++++++++++++++++++++++++
lib/ipsec/meson.build | 6 +-
lib/ipsec/rte_ipsec.h | 23 +++
lib/ipsec/sa.c | 10 +-
lib/ipsec/sa.h | 9 +
lib/ipsec/version.map | 9 +
10 files changed, 328 insertions(+), 11 deletions(-)
create mode 100644 lib/ipsec/ipsec_telemetry.c
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index af51ff8131..1bafdc608c 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -319,6 +319,13 @@ Supported features
AES_GMAC, HMAC-SHA1, NULL.
+Telemetry support
+------------------
+Telemetry support exposes SA configuration details and IPsec packet data
+counter statistics. Per-SA telemetry statistics can be enabled using
+``rte_ipsec_telemetry_sa_add`` and disabled using
+``rte_ipsec_telemetry_sa_del``. Note that these calls are not thread safe.
+
Limitations
-----------
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9b6591340a..8286a6cee7 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -163,6 +163,7 @@ New Features
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
* Added support for NAT-T / UDP encapsulated ESP
+ * Added support for SA telemetry.
Removed Items
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index d66c88f05d..6fbe468a61 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -15,7 +15,7 @@
#include "misc.h"
#include "pad.h"
-typedef uint16_t (*esp_inb_process_t)(const struct rte_ipsec_sa *sa,
+typedef uint16_t (*esp_inb_process_t)(struct rte_ipsec_sa *sa,
struct rte_mbuf *mb[], uint32_t sqn[], uint32_t dr[], uint16_t num,
uint8_t sqh_len);
@@ -573,10 +573,10 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
* *process* function for tunnel packets
*/
static inline uint16_t
-tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+tun_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
- uint32_t adj, i, k, tl;
+ uint32_t adj, i, k, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -598,6 +598,7 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
adj = hl[i] + cofs;
@@ -621,10 +622,13 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
tun_process_step3(mb[i], sa->tx_offload.msk,
sa->tx_offload.val);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
@@ -632,11 +636,11 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
* *process* function for transport packets
*/
static inline uint16_t
-trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+trs_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
uint32_t sqn[], uint32_t dr[], uint16_t num, uint8_t sqh_len)
{
char *np;
- uint32_t i, k, l2, tl;
+ uint32_t i, k, l2, tl, bytes;
uint32_t hl[num], to[num];
struct rte_esp_tail espt[num];
struct rte_mbuf *ml[num];
@@ -656,6 +660,7 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i], &to[i]);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
tl = tlen + espt[i].pad_len;
@@ -674,10 +679,13 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
/* update mbuf's metadata */
trs_process_step3(mb[i]);
k++;
+ bytes += mb[i]->pkt_len;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
return k;
}
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 0e3314b358..b6c72f58a4 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -606,7 +606,7 @@ uint16_t
esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
uint16_t num)
{
- uint32_t i, k, icv_len, *icv;
+ uint32_t i, k, icv_len, *icv, bytes;
struct rte_mbuf *ml;
struct rte_ipsec_sa *sa;
uint32_t dr[num];
@@ -615,6 +615,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
k = 0;
icv_len = sa->icv_len;
+ bytes = 0;
for (i = 0; i != num; i++) {
if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
@@ -625,10 +626,13 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
icv = rte_pktmbuf_mtod_offset(ml, void *,
ml->data_len - icv_len);
remove_sqh(icv, icv_len);
+ bytes += mb[i]->pkt_len;
k++;
} else
dr[i - k] = i;
}
+ sa->statistics.count += k;
+ sa->statistics.bytes += bytes;
/* handle unprocessed mbufs */
if (k != num) {
@@ -648,16 +652,20 @@ static inline void
inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, ol_flags;
+ uint32_t i, ol_flags, bytes;
ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+ bytes = 0;
for (i = 0; i != num; i++) {
mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+ bytes += mb[i]->pkt_len;
if (ol_flags != 0)
rte_security_set_pkt_metadata(ss->security.ctx,
ss->security.ses, mb[i], NULL);
}
+ ss->sa->statistics.count += num;
+ ss->sa->statistics.bytes += bytes;
}
/*
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
new file mode 100644
index 0000000000..713da75f38
--- /dev/null
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include <rte_telemetry.h>
+#include <rte_malloc.h>
+#include "sa.h"
+
+
+struct ipsec_telemetry_entry {
+ LIST_ENTRY(ipsec_telemetry_entry) next;
+ const struct rte_ipsec_sa *sa;
+};
+static LIST_HEAD(ipsec_telemetry_head, ipsec_telemetry_entry)
+ ipsec_telemetry_list = LIST_HEAD_INITIALIZER();
+
+static int
+handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
+ const char *params __rte_unused,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ rte_tel_data_start_array(data, RTE_TEL_U64_VAL);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ const struct rte_ipsec_sa *sa = entry->sa;
+ rte_tel_data_add_array_u64(data, rte_be_to_cpu_32(sa->spi));
+ }
+
+ return 0;
+}
+
+/**
+ * Handle IPsec SA statistics telemetry request
+ *
+ * Return dict of SA's with dict of key/value counters
+ *
+ * {
+ * "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
+ * "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
+ * }
+ *
+ */
+static int
+handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ const struct rte_ipsec_sa *sa;
+ uint32_t sa_spi = 0;
+
+ if (params) {
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 0));
+ if (sa_spi == 0)
+ return -EINVAL;
+ }
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ char sa_name[64];
+ sa = entry->sa;
+ static const char *name_pkt_cnt = "count";
+ static const char *name_byte_cnt = "bytes";
+ static const char *name_error_cnt = "errors";
+ struct rte_tel_data *sa_data;
+
+ /* If user provided SPI only get telemetry for that SA */
+ if (sa_spi && (sa_spi != sa->spi))
+ continue;
+
+ /* allocate telemetry data struct for SA telemetry */
+ sa_data = rte_tel_data_alloc();
+ if (!sa_data)
+ return -ENOMEM;
+
+ rte_tel_data_start_dict(sa_data);
+
+ /* add telemetry key/values pairs */
+ rte_tel_data_add_dict_u64(sa_data, name_pkt_cnt,
+ sa->statistics.count);
+
+ rte_tel_data_add_dict_u64(sa_data, name_byte_cnt,
+ sa->statistics.bytes -
+ (sa->statistics.count * sa->hdr_len));
+
+ rte_tel_data_add_dict_u64(sa_data, name_error_cnt,
+ sa->statistics.errors.count);
+
+ /* generate telemetry label */
+ snprintf(sa_name, sizeof(sa_name), "SA_SPI_%i",
+ rte_be_to_cpu_32(sa->spi));
+
+ /* add SA telemetry to dictionary container */
+ rte_tel_data_add_dict_container(data, sa_name, sa_data, 0);
+ }
+
+ return 0;
+}
+
+static int
+handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
+ const char *params,
+ struct rte_tel_data *data)
+{
+ struct ipsec_telemetry_entry *entry;
+ const struct rte_ipsec_sa *sa;
+ uint32_t sa_spi = 0;
+
+ if (params)
+ sa_spi = rte_cpu_to_be_32((uint32_t)strtoul(params, NULL, 0));
+ /* valid SPI needed */
+ if (sa_spi == 0)
+ return -EINVAL;
+
+
+ rte_tel_data_start_dict(data);
+
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ uint64_t mode;
+ sa = entry->sa;
+ if (sa_spi != sa->spi)
+ continue;
+
+ /* add SA configuration key/values pairs */
+ rte_tel_data_add_dict_string(data, "Type",
+ (sa->type & RTE_IPSEC_SATP_PROTO_MASK) ==
+ RTE_IPSEC_SATP_PROTO_AH ? "AH" : "ESP");
+
+ rte_tel_data_add_dict_string(data, "Direction",
+ (sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB ? "Inbound" : "Outbound");
+
+ mode = sa->type & RTE_IPSEC_SATP_MODE_MASK;
+
+ if (mode == RTE_IPSEC_SATP_MODE_TRANS) {
+ rte_tel_data_add_dict_string(data, "Mode", "Transport");
+ } else {
+ rte_tel_data_add_dict_string(data, "Mode", "Tunnel");
+
+ if ((sa->type & RTE_IPSEC_SATP_NATT_MASK) ==
+ RTE_IPSEC_SATP_NATT_ENABLE) {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4-UDP");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6-UDP");
+ }
+ } else {
+ if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv4");
+ } else if (sa->type &
+ RTE_IPSEC_SATP_MODE_TUNLV6) {
+ rte_tel_data_add_dict_string(data,
+ "Tunnel-Type",
+ "IPv6");
+ }
+ }
+ }
+
+ rte_tel_data_add_dict_string(data,
+ "extended-sequence-number",
+ (sa->type & RTE_IPSEC_SATP_ESN_MASK) ==
+ RTE_IPSEC_SATP_ESN_ENABLE ?
+ "enabled" : "disabled");
+
+ if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+ RTE_IPSEC_SATP_DIR_IB)
+
+ if (sa->sqn.inb.rsn[sa->sqn.inb.rdidx])
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number",
+ sa->sqn.inb.rsn[sa->sqn.inb.rdidx]->sqn);
+ else
+ rte_tel_data_add_dict_u64(data,
+ "sequence-number", 0);
+ else
+ rte_tel_data_add_dict_u64(data, "sequence-number",
+ sa->sqn.outb);
+
+ rte_tel_data_add_dict_string(data,
+ "explicit-congestion-notification",
+ (sa->type & RTE_IPSEC_SATP_ECN_MASK) ==
+ RTE_IPSEC_SATP_ECN_ENABLE ?
+ "enabled" : "disabled");
+
+ rte_tel_data_add_dict_string(data,
+ "copy-DSCP",
+ (sa->type & RTE_IPSEC_SATP_DSCP_MASK) ==
+ RTE_IPSEC_SATP_DSCP_ENABLE ?
+ "enabled" : "disabled");
+ }
+
+ return 0;
+}
+
+
+int
+rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry = rte_zmalloc(NULL,
+ sizeof(struct ipsec_telemetry_entry), 0);
+ if (entry == NULL)
+ return -ENOMEM;
+ entry->sa = sa;
+ LIST_INSERT_HEAD(&ipsec_telemetry_list, entry, next);
+ return 0;
+}
+
+void
+rte_ipsec_telemetry_sa_del(const struct rte_ipsec_sa *sa)
+{
+ struct ipsec_telemetry_entry *entry;
+ LIST_FOREACH(entry, &ipsec_telemetry_list, next) {
+ if (sa == entry->sa) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ return;
+ }
+ }
+}
+
+
+RTE_INIT(rte_ipsec_telemetry_init)
+{
+ rte_telemetry_register_cmd("/ipsec/sa/list",
+ handle_telemetry_cmd_ipsec_sa_list,
+ "Return list of IPsec SAs with telemetry enabled.");
+ rte_telemetry_register_cmd("/ipsec/sa/stats",
+ handle_telemetry_cmd_ipsec_sa_stats,
+ "Returns IPsec SA statistics. Parameters: int sa_spi");
+ rte_telemetry_register_cmd("/ipsec/sa/details",
+ handle_telemetry_cmd_ipsec_sa_details,
+ "Returns IPsec SA configuration. Parameters: int sa_spi");
+}
+
diff --git a/lib/ipsec/meson.build b/lib/ipsec/meson.build
index 1497f573bb..ddb9ea1767 100644
--- a/lib/ipsec/meson.build
+++ b/lib/ipsec/meson.build
@@ -1,9 +1,11 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-sources = files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c', 'ipsec_sad.c')
+sources = files('esp_inb.c', 'esp_outb.c',
+ 'sa.c', 'ses.c', 'ipsec_sad.c',
+ 'ipsec_telemetry.c')
headers = files('rte_ipsec.h', 'rte_ipsec_sa.h', 'rte_ipsec_sad.h')
indirect_headers += files('rte_ipsec_group.h')
-deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash']
+deps += ['mbuf', 'net', 'cryptodev', 'security', 'hash', 'telemetry']
diff --git a/lib/ipsec/rte_ipsec.h b/lib/ipsec/rte_ipsec.h
index dd60d95915..5308f250a7 100644
--- a/lib/ipsec/rte_ipsec.h
+++ b/lib/ipsec/rte_ipsec.h
@@ -158,6 +158,29 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
return ss->pkt_func.process(ss, mb, num);
}
+
+/**
+ * Enable per SA telemetry for a specific SA.
+ * Note that this function is not thread safe
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry enabled.
+ * @return
+ * 0 on success, negative value otherwise.
+ */
+__rte_experimental
+int
+rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa);
+
+/**
+ * Disable per SA telemetry for a specific SA.
+ * Note that this function is not thread safe
+ * @param sa
+ * Pointer to the *rte_ipsec_sa* object that will have telemetry disabled.
+ */
+__rte_experimental
+void
+rte_ipsec_telemetry_sa_del(const struct rte_ipsec_sa *sa);
+
#include <rte_ipsec_group.h>
#ifdef __cplusplus
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index fa5a76cde1..bbe2fa3612 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -653,19 +653,25 @@ uint16_t
pkt_flag_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
- uint32_t i, k;
+ uint32_t i, k, bytes;
uint32_t dr[num];
RTE_SET_USED(ss);
k = 0;
+ bytes = 0;
for (i = 0; i != num; i++) {
- if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+ if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
k++;
+ bytes += mb[i]->pkt_len;
+ }
else
dr[i - k] = i;
}
+ ss->sa->statistics.count += k;
+ ss->sa->statistics.bytes += bytes;
+
/* handle unprocessed mbufs */
if (k != num) {
rte_errno = EBADMSG;
diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
index 107ebd1519..6e59f18e16 100644
--- a/lib/ipsec/sa.h
+++ b/lib/ipsec/sa.h
@@ -132,6 +132,15 @@ struct rte_ipsec_sa {
struct replay_sqn *rsn[REPLAY_SQN_NUM];
} inb;
} sqn;
+ /* Statistics */
+ struct {
+ uint64_t count;
+ uint64_t bytes;
+ struct {
+ uint64_t count;
+ uint64_t authentication_failed;
+ } errors;
+ } statistics;
} __rte_cache_aligned;
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ba8753eac4..0af27ffd60 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -19,3 +19,12 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 21.11
+ rte_ipsec_telemetry_sa_add;
+ rte_ipsec_telemetry_sa_del;
+
+};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v10 7/9] ipsec: add support for initial SQN value
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (5 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 6/9] ipsec: add support for SA telemetry Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 8/9] doc: remove unneeded ipsec new field deprecation Radu Nicolau
` (2 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, mdr, bruce.richardson, roy.fan.zhang, hemant.agrawal,
gakhil, anoobj, declan.doherty, abhijit.sinha, daniel.m.buckley,
marchana, ktejasree, matan, Radu Nicolau
Update IPsec library to support initial SQN value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/sa.c | 18 +++++++++++-------
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8286a6cee7..89ed92abd5 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -164,6 +164,7 @@ New Features
* Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
* Added support for NAT-T / UDP encapsulated ESP
* Added support for SA telemetry.
+ * Added support for setting a non-default starting ESN value.
Removed Items
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index bbe2fa3612..9d5ffda627 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -294,11 +294,11 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
* Init ESP outbound specific things.
*/
static void
-esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen, uint64_t sqn)
{
uint8_t algo_type;
- sa->sqn.outb = 1;
+ sa->sqn.outb = sqn > 1 ? sqn : 1;
algo_type = sa->algo_type;
@@ -376,7 +376,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
- esp_outb_init(sa, sa->hdr_len);
+ esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
}
/*
@@ -502,7 +502,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS |
RTE_IPSEC_SATP_NATT_ENABLE):
case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
- esp_outb_init(sa, 0);
+ esp_outb_init(sa, 0, prm->ipsec_xform.esn.value);
break;
}
@@ -513,15 +513,19 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
* helper function, init SA replay structure.
*/
static void
-fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
+ uint64_t sqn)
{
sa->replay.win_sz = wnd_sz;
sa->replay.nb_bucket = nb_bucket;
sa->replay.bucket_index_mask = nb_bucket - 1;
sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
- if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+ sa->sqn.inb.rsn[0]->sqn = sqn;
+ if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) {
sa->sqn.inb.rsn[1] = (struct replay_sqn *)
((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+ sa->sqn.inb.rsn[1]->sqn = sqn;
+ }
}
int
@@ -601,7 +605,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
/* fill replay window related fields */
if (nb != 0)
- fill_sa_replay(sa, wsz, nb);
+ fill_sa_replay(sa, wsz, nb, prm->ipsec_xform.esn.value);
return sz;
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v10 8/9] doc: remove unneeded ipsec new field deprecation
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (6 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 7/9] ipsec: add support for initial SQN value Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 9/9] doc: remove unneeded security deprecation Radu Nicolau
2021-10-17 12:17 ` [dpdk-dev] [EXT] [PATCH v10 0/9] new features for ipsec and security libraries Akhil Goyal
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
The deprecation notice regarding extending rte_ipsec_sa_prm with a
new field hdr_l3_len is no longer applicable.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
1 file changed, 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a744fdb2c6..e11db1bd4a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -206,9 +206,6 @@ Deprecation Notices
will be updated with new fields to support new features like TSO in case of
protocol offload.
-* ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
- ``hdr_l3_len`` to configure tunnel L3 header length.
-
* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
to make the driver interface as internal and the structures ``rte_eventdev_data``,
``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [dpdk-dev] [PATCH v10 9/9] doc: remove unneeded security deprecation
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (7 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 8/9] doc: remove unneeded ipsec new field deprecation Radu Nicolau
@ 2021-10-14 16:03 ` Radu Nicolau
2021-10-17 12:17 ` [dpdk-dev] [EXT] [PATCH v10 0/9] new features for ipsec and security libraries Akhil Goyal
9 siblings, 0 replies; 184+ messages in thread
From: Radu Nicolau @ 2021-10-14 16:03 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
roy.fan.zhang, hemant.agrawal, gakhil, anoobj, declan.doherty,
abhijit.sinha, daniel.m.buckley, marchana, ktejasree, matan,
Radu Nicolau
Following feedback, the new fields regarding TSO support were not
implemented; it was decided instead to implement TSO support using
existing mbuf fields.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 7 -------
1 file changed, 7 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e11db1bd4a..9585f90af6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -199,13 +199,6 @@ Deprecation Notices
pointer for the private data to the application which can be attached
to the packet while enqueuing.
-* security: The structure ``rte_security_ipsec_xform`` will be extended with:
- new field: IPsec payload MSS (Maximum Segment Size).
-
-* security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
- will be updated with new fields to support new features like TSO in case of
- protocol offload.
-
* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
to make the driver interface as internal and the structures ``rte_eventdev_data``,
``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v10 0/9] new features for ipsec and security libraries
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 0/9] new features for ipsec and security libraries Radu Nicolau
` (8 preceding siblings ...)
2021-10-14 16:03 ` [dpdk-dev] [PATCH v10 9/9] doc: remove unneeded security deprecation Radu Nicolau
@ 2021-10-17 12:17 ` Akhil Goyal
2021-10-18 9:06 ` Nicolau, Radu
9 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-10-17 12:17 UTC (permalink / raw)
To: Radu Nicolau
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
> Add support for:
> NAT-T/UDP encapsulation
> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
> SA telemetry
> ESN with initial SQN value
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>
> Radu Nicolau (9):
> security: add ESN field to ipsec_xform
> ipsec: add support for AEAD algorithms
> security: add UDP params for IPsec NAT-T
> ipsec: add support for NAT-T
> mbuf: add IPsec ESP tunnel type
> ipsec: add support for SA telemetry
> ipsec: add support for initial SQN value
> doc: remove unneeded ipsec new field deprecation
Can you specify why this field is not needed now?
> doc: remove unneeded security deprecation
Series Acked-by: Akhil Goyal <gakhil@marvell.com>
Modified release notes and patch titles while merging.
Applied to dpdk-next-crypto
Thanks.
>
> doc/guides/prog_guide/ipsec_lib.rst | 12 +-
> doc/guides/rel_notes/deprecation.rst | 11 --
> doc/guides/rel_notes/release_21_11.rst | 16 ++
> lib/ipsec/crypto.h | 137 ++++++++++++++
> lib/ipsec/esp_inb.c | 84 ++++++++-
> lib/ipsec/esp_outb.c | 91 ++++++++-
> lib/ipsec/ipsec_telemetry.c | 244 +++++++++++++++++++++++++
> lib/ipsec/meson.build | 6 +-
> lib/ipsec/rte_ipsec.h | 23 +++
> lib/ipsec/rte_ipsec_sa.h | 9 +-
> lib/ipsec/sa.c | 120 ++++++++++--
> lib/ipsec/sa.h | 15 ++
> lib/ipsec/version.map | 9 +
> lib/mbuf/rte_mbuf_core.h | 1 +
> lib/security/rte_security.h | 15 ++
> 15 files changed, 745 insertions(+), 48 deletions(-)
> create mode 100644 lib/ipsec/ipsec_telemetry.c
>
> --
>
> v2: fixed lib/ipsec/version.map updates to show correct version
> v3: fixed build error and corrected misspelled email address
> v4: add doxygen comments for the IPsec telemetry APIs
> update inline comments refering to the wrong RFC
> v5: update commit messages after feedback
> update the UDP encapsulation patch to actually use the configured ports
> v6: fix initial SQN value
> v7: reworked the patches after feedback
> v8: updated library doc, release notes and removed deprecation notices
> v9: reworked telemetry, tso and esn patches
> v10: removed TSO patch, addressed feedback
>
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v10 0/9] new features for ipsec and security libraries
2021-10-17 12:17 ` [dpdk-dev] [EXT] [PATCH v10 0/9] new features for ipsec and security libraries Akhil Goyal
@ 2021-10-18 9:06 ` Nicolau, Radu
0 siblings, 0 replies; 184+ messages in thread
From: Nicolau, Radu @ 2021-10-18 9:06 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, mdr, konstantin.ananyev, vladimir.medvedkin,
bruce.richardson, roy.fan.zhang, hemant.agrawal, Anoob Joseph,
declan.doherty, abhijit.sinha, daniel.m.buckley,
Archana Muniganti, Tejasree Kondoj, matan
On 10/17/2021 1:17 PM, Akhil Goyal wrote:
>> Add support for:
>> NAT-T/UDP encapsulation
>> AES_CCM, CHACHA20_POLY1305 and AES_GMAC
>> SA telemetry
>> ESN with initial SQN value
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>>
>> Radu Nicolau (9):
>> security: add ESN field to ipsec_xform
>> ipsec: add support for AEAD algorithms
>> security: add UDP params for IPsec NAT-T
>> ipsec: add support for NAT-T
>> mbuf: add IPsec ESP tunnel type
>> ipsec: add support for SA telemetry
>> ipsec: add support for initial SQN value
>> doc: remove unneeded ipsec new field deprecation
> Can you specify why this field is not needed now?
It was part of the TSO feature that is being reworked with no API changes.
>
>> doc: remove unneeded security deprecation
> Series Acked-by: Akhil Goyal <gakhil@marvell.com>
>
> Modified release notes and patch titles while merging.
> Applied to dpdk-next-crypto
>
> Thanks.
Thank you!
^ permalink raw reply [flat|nested] 184+ messages in thread