* [dpdk-dev] [PATCH v4 0/2] ipsec: add transmit segmentation offload support
@ 2021-10-27 13:03 Radu Nicolau
2021-10-27 13:03 ` [dpdk-dev] [PATCH v4 1/2] ipsec: add TSO support Radu Nicolau
2021-10-27 13:03 ` [dpdk-dev] [PATCH v4 2/2] examples/ipsec-secgw: add support for TCP TSO Radu Nicolau
0 siblings, 2 replies; 4+ messages in thread
From: Radu Nicolau @ 2021-10-27 13:03 UTC
Cc: dev, gakhil, anoobj, konstantin.ananyev, Radu Nicolau
Add transmit segmentation offload support to the IPsec library and IPsec
Security GW sample app.
These patches were split from previously sent patchsets, with feedback addressed:
https://patchwork.dpdk.org/project/dpdk/patch/20211013121331.300245-7-radu.nicolau@intel.com/
https://patchwork.dpdk.org/project/dpdk/patch/20211001095202.3343782-5-radu.nicolau@intel.com/
Radu Nicolau (2):
ipsec: add TSO support
examples/ipsec-secgw: add support for TCP TSO
doc/guides/prog_guide/ipsec_lib.rst | 2 +
doc/guides/rel_notes/release_21_11.rst | 5 +
doc/guides/sample_app_ug/ipsec_secgw.rst | 11 ++
examples/ipsec-secgw/ipsec-secgw.c | 4 +
examples/ipsec-secgw/ipsec.h | 1 +
examples/ipsec-secgw/ipsec_process.c | 22 ++++
examples/ipsec-secgw/sa.c | 25 +++-
lib/ipsec/esp_outb.c | 141 ++++++++++++++++++-----
8 files changed, 176 insertions(+), 35 deletions(-)
--
v2: addressed feedback and rebased to RC1
v3: added check for offload caps and removed duplicate code
v4: updated check for offload caps and reworked the sample app patch for TCP only
2.25.1
* [dpdk-dev] [PATCH v4 1/2] ipsec: add TSO support
2021-10-27 13:03 [dpdk-dev] [PATCH v4 0/2] ipsec: add transmit segmentation offload support Radu Nicolau
@ 2021-10-27 13:03 ` Radu Nicolau
2021-10-27 13:03 ` [dpdk-dev] [PATCH v4 2/2] examples/ipsec-secgw: add support for TCP TSO Radu Nicolau
1 sibling, 0 replies; 4+ messages in thread
From: Radu Nicolau @ 2021-10-27 13:03 UTC
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev, gakhil, anoobj, Radu Nicolau, Declan Doherty, Abhijit Sinha,
Daniel Martin Buckley, Fan Zhang
Add support for transmit segmentation offload to the inline crypto processing
mode. This offload is not supported by the other offload modes because, at a
minimum, it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/prog_guide/ipsec_lib.rst | 2 +
doc/guides/rel_notes/release_21_11.rst | 1 +
lib/ipsec/esp_outb.c | 141 +++++++++++++++++++------
3 files changed, 112 insertions(+), 32 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 52afdcda9f..0bdbdad1e4 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -315,6 +315,8 @@ Supported features
* NAT-T / UDP encapsulated ESP.
+* TSO (only for inline crypto mode)
+
* algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
AES_GMAC, HMAC-SHA1, NULL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1ccac87b73..b5b5abadee 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -268,6 +268,7 @@ New Features
* Added support for NAT-T / UDP encapsulated ESP.
* Added support for SA telemetry.
* Added support for setting a non default starting ESN value.
+ * Added support for TSO in inline crypto mode.
* **Added multi-process support for testpmd.**
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 336d24a6af..b7a70fd001 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -18,7 +18,7 @@
typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len);
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t tso);
/*
* helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -139,7 +139,7 @@ outb_cop_prepare(struct rte_crypto_op *cop,
static inline int32_t
outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
{
uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
struct rte_mbuf *ml;
@@ -157,11 +157,19 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
- /* pad length + esp tail */
- pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+ if (!tso) {
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+ /* pad length + esp tail */
+ pdlen = clen - plen;
+ tlen = pdlen + sa->icv_len + sqh_len;
+ } else {
+ /* We don't need to pad/align packet or append ICV length
+ * when using TSO offload
+ */
+ pdlen = clen - plen;
+ tlen = pdlen + sqh_len;
+ }
/* do append and prepend */
ml = rte_pktmbuf_lastseg(mb);
@@ -309,7 +317,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -336,7 +344,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
static inline int32_t
outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
- union sym_op_data *icv, uint8_t sqh_len)
+ union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
{
uint8_t np;
uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -358,11 +366,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
/* number of bytes to encrypt */
clen = plen + sizeof(*espt);
- clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
- /* pad length + esp tail */
- pdlen = clen - plen;
- tlen = pdlen + sa->icv_len + sqh_len;
+ if (!tso) {
+ clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+ /* pad length + esp tail */
+ pdlen = clen - plen;
+ tlen = pdlen + sa->icv_len + sqh_len;
+ } else {
+ /* We don't need to pad/align packet or append ICV length
+ * when using TSO offload
+ */
+ pdlen = clen - plen;
+ tlen = pdlen + sqh_len;
+ }
/* do append and insert */
ml = rte_pktmbuf_lastseg(mb);
@@ -452,7 +468,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
/* try to update the packet itself */
rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
- sa->sqh_len);
+ sa->sqh_len, 0);
/* success, setup crypto op */
if (rc >= 0) {
outb_pkt_xprepare(sa, sqc, &icv);
@@ -549,7 +565,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
gen_iv(ivbuf[k], sqc);
/* try to update the packet itself */
- rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+ rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
/* success, proceed with preparations */
if (rc >= 0) {
@@ -668,6 +684,31 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
ss->sa->statistics.bytes += bytes;
}
+
+static inline int
+esn_outb_nb_segments(struct rte_mbuf *m)
+{
+ if (m->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) {
+ uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+ uint16_t segments =
+ (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) ?
+ (pkt_l3len + m->tso_segsz - 1) / m->tso_segsz : 1;
+ return segments;
+ }
+ return 1; /* no TSO */
+}
+
+/* Compute how many packets can be sent before overflow occurs */
+static inline uint16_t
+esn_outb_nb_valid_packets(uint16_t num, uint32_t n_sqn, uint16_t nb_segs[])
+{
+ uint16_t i;
+ uint32_t seg_cnt = 0;
+ for (i = 0; i < num && seg_cnt < n_sqn; i++)
+ seg_cnt += nb_segs[i];
+ return i - 1;
+}
+
/*
* process group of ESP outbound tunnel packets destined for
* INLINE_CRYPTO type of device.
@@ -677,29 +718,47 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_segs_total, n_sqn;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
+ nb_segs_total = 0;
+ /* Calculate number of segments */
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_segs_total += nb_segs[i];
+ }
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ n_sqn = nb_segs_total;
+ sqn = esn_outb_update_sqn(sa, &n_sqn);
+ if (n_sqn != nb_segs_total) {
rte_errno = EOVERFLOW;
+ /* if there are segmented packets find out how many can be
+ * sent until overflow occurs
+ */
+ if (nb_segs_total > num) /* there is at least 1 */
+ num = esn_outb_nb_valid_packets(num, n_sqn, nb_segs);
+ else
+ num = n_sqn; /* no segmented packets */
+ }
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
- sqc = rte_cpu_to_be_64(sqn + i);
+ sqc = rte_cpu_to_be_64(sqn);
gen_iv(iv, sqc);
+ sqn += nb_segs[i];
/* try to update the packet itself */
- rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
+ (mb[i]->ol_flags &
+ (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) != 0);
k += (rc >= 0);
@@ -711,8 +770,8 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
@@ -727,29 +786,47 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
struct rte_mbuf *mb[], uint16_t num)
{
int32_t rc;
- uint32_t i, k, n;
+ uint32_t i, k, nb_segs_total, n_sqn;
uint64_t sqn;
rte_be64_t sqc;
struct rte_ipsec_sa *sa;
union sym_op_data icv;
uint64_t iv[IPSEC_MAX_IV_QWORD];
uint32_t dr[num];
+ uint16_t nb_segs[num];
sa = ss->sa;
+ nb_segs_total = 0;
+ /* Calculate number of segments */
+ for (i = 0; i != num; i++) {
+ nb_segs[i] = esn_outb_nb_segments(mb[i]);
+ nb_segs_total += nb_segs[i];
+ }
- n = num;
- sqn = esn_outb_update_sqn(sa, &n);
- if (n != num)
+ n_sqn = nb_segs_total;
+ sqn = esn_outb_update_sqn(sa, &n_sqn);
+ if (n_sqn != nb_segs_total) {
rte_errno = EOVERFLOW;
+ /* if there are segmented packets find out how many can be
+ * sent until overflow occurs
+ */
+ if (nb_segs_total > num) /* there is at least 1 */
+ num = esn_outb_nb_valid_packets(num, n_sqn, nb_segs);
+ else
+ num = n_sqn; /* no segmented packets */
+ }
k = 0;
- for (i = 0; i != n; i++) {
+ for (i = 0; i != num; i++) {
- sqc = rte_cpu_to_be_64(sqn + i);
+ sqc = rte_cpu_to_be_64(sqn);
gen_iv(iv, sqc);
+ sqn += nb_segs[i];
/* try to update the packet itself */
- rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
+ (mb[i]->ol_flags &
+ (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) != 0);
k += (rc >= 0);
@@ -761,8 +838,8 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
}
/* copy not processed mbufs beyond good ones */
- if (k != n && k != 0)
- move_bad_mbufs(mb, dr, n, n - k);
+ if (k != num && k != 0)
+ move_bad_mbufs(mb, dr, num, num - k);
inline_outb_mbuf_prepare(ss, mb, k);
return k;
--
2.25.1
* [dpdk-dev] [PATCH v4 2/2] examples/ipsec-secgw: add support for TCP TSO
2021-10-27 13:03 [dpdk-dev] [PATCH v4 0/2] ipsec: add transmit segmentation offload support Radu Nicolau
2021-10-27 13:03 ` [dpdk-dev] [PATCH v4 1/2] ipsec: add TSO support Radu Nicolau
@ 2021-10-27 13:03 ` Radu Nicolau
2021-10-28 11:33 ` Ananyev, Konstantin
1 sibling, 1 reply; 4+ messages in thread
From: Radu Nicolau @ 2021-10-27 13:03 UTC
To: Radu Nicolau, Akhil Goyal; +Cc: dev, anoobj, konstantin.ananyev, Declan Doherty
Add support to allow the user to specify the MSS for TCP TSO offload on a
per-SA basis. MSS configuration for IPsec is only supported for outbound
SAs used with the inline IPsec crypto offload.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
doc/guides/rel_notes/release_21_11.rst | 4 ++++
doc/guides/sample_app_ug/ipsec_secgw.rst | 11 +++++++++++
examples/ipsec-secgw/ipsec-secgw.c | 4 ++++
examples/ipsec-secgw/ipsec.h | 1 +
examples/ipsec-secgw/ipsec_process.c | 22 +++++++++++++++++++++
examples/ipsec-secgw/sa.c | 25 +++++++++++++++++++++---
6 files changed, 64 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b5b5abadee..35ececc3f2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -306,6 +306,10 @@ New Features
* Pcapng format with timestamps and meta-data.
* Fixes packet capture with stripped VLAN tags.
+* **IPsec Security Gateway sample application new features.**
+
+ * Added support for TSO (only for inline crypto TCP packets)
+
Removed Items
-------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 782574dd39..639d309a6e 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -720,6 +720,17 @@ where each options means:
* *udp-encap*
+ ``<mss>``
+
+ * Maximum segment size for TSO offload, available for egress SAs only.
+
+ * Optional: Yes, TSO offload not set by default
+
+ * Syntax:
+
+ * *mss N* N is the segment size in bytes
+
+
Example SA rules:
.. code-block:: console
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 4bdf99b62b..5fcf424efe 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -398,6 +398,10 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
pkt->l2_len = 0;
pkt->l3_len = sizeof(*iph4);
pkt->packet_type |= RTE_PTYPE_L3_IPV4;
+ if (pkt->packet_type & RTE_PTYPE_L4_TCP)
+ pkt->l4_len = sizeof(struct rte_tcp_hdr);
+ else if (pkt->packet_type & RTE_PTYPE_L4_UDP)
+ pkt->l4_len = sizeof(struct rte_udp_hdr);
} else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) {
int next_proto;
size_t l3len, ext_len;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 8405c48171..2c3640833d 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -137,6 +137,7 @@ struct ipsec_sa {
enum rte_security_ipsec_sa_direction direction;
uint8_t udp_encap;
uint16_t portid;
+ uint16_t mss;
uint8_t fdir_qid;
uint8_t fdir_flag;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 5012e1a6a4..bb56e97ad7 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -222,6 +222,28 @@ prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
for (j = 0; j != cnt; j++) {
priv = get_priv(mb[j]);
priv->sa = sa;
+ /* setup TSO related fields if TSO enabled*/
+ if (priv->sa->mss) {
+ /* TCP only */
+ uint32_t ptype = mb[j]->packet_type;
+ if (ptype & (RTE_PTYPE_L4_TCP == 0))
+ continue;
+
+ mb[j]->tso_segsz = priv->sa->mss;
+ if ((IS_TUNNEL(priv->sa->flags))) {
+ mb[j]->outer_l3_len = mb[j]->l3_len;
+ mb[j]->outer_l2_len = mb[j]->l2_len;
+ mb[j]->ol_flags |=
+ (RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+ RTE_MBUF_F_TX_TUNNEL_ESP);
+ }
+ mb[j]->ol_flags |= (RTE_MBUF_F_TX_TCP_SEG |
+ RTE_MBUF_F_TX_TCP_CKSUM);
+ if (RTE_ETH_IS_IPV4_HDR(ptype))
+ mb[j]->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
+ else
+ mb[j]->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV6;
+ }
}
}
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 88dd30464f..97f265cc7b 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -677,6 +677,16 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
continue;
}
+ if (strcmp(tokens[ti], "mss") == 0) {
+ INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+ if (status->status < 0)
+ return;
+ rule->mss = atoi(tokens[ti]);
+ if (status->status < 0)
+ return;
+ continue;
+ }
+
if (strcmp(tokens[ti], "fallback") == 0) {
struct rte_ipsec_session *fb;
@@ -970,7 +980,7 @@ sa_create(const char *name, int32_t socket_id, uint32_t nb_sa)
}
static int
-check_eth_dev_caps(uint16_t portid, uint32_t inbound)
+check_eth_dev_caps(uint16_t portid, uint32_t inbound, uint32_t tso)
{
struct rte_eth_dev_info dev_info;
int retval;
@@ -999,6 +1009,12 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
"hardware TX IPSec offload is not supported\n");
return -EINVAL;
}
+ if (tso && (dev_info.tx_offload_capa &
+ RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
+ RTE_LOG(WARNING, PORT,
+ "hardware TCP TSO offload is not supported\n");
+ return -EINVAL;
+ }
}
return 0;
}
@@ -1127,7 +1143,7 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL ||
ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
- if (check_eth_dev_caps(sa->portid, inbound))
+ if (check_eth_dev_caps(sa->portid, inbound, sa->mss))
return -EINVAL;
}
@@ -1638,8 +1654,11 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
if ((rule_type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
rule_type ==
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
- && rule->portid == port_id)
+ && rule->portid == port_id) {
*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
+ if (rule->mss)
+ *tx_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
+ }
}
return 0;
}
--
2.25.1
* Re: [dpdk-dev] [PATCH v4 2/2] examples/ipsec-secgw: add support for TCP TSO
2021-10-27 13:03 ` [dpdk-dev] [PATCH v4 2/2] examples/ipsec-secgw: add support for TCP TSO Radu Nicolau
@ 2021-10-28 11:33 ` Ananyev, Konstantin
0 siblings, 0 replies; 4+ messages in thread
From: Ananyev, Konstantin @ 2021-10-28 11:33 UTC
To: Nicolau, Radu, Akhil Goyal; +Cc: dev, anoobj, Doherty, Declan
> Add support to allow the user to specify the MSS for TCP TSO offload on a
> per-SA basis. MSS configuration for IPsec is only supported for outbound
> SAs used with the inline IPsec crypto offload.
> [...]
> diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
> index 5012e1a6a4..bb56e97ad7 100644
> --- a/examples/ipsec-secgw/ipsec_process.c
> +++ b/examples/ipsec-secgw/ipsec_process.c
> @@ -222,6 +222,28 @@ prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
> for (j = 0; j != cnt; j++) {
> priv = get_priv(mb[j]);
> priv->sa = sa;
> + /* setup TSO related fields if TSO enabled*/
> + if (priv->sa->mss) {
> + /* TCP only */
> + uint32_t ptype = mb[j]->packet_type;
> + if (ptype & (RTE_PTYPE_L4_TCP == 0))
It should be:
if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
...
With that fixed:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> [...]