From: "Ananyev, Konstantin"
To: "Nicolau, Radu", "Iremonger, Bernard", "Medvedkin, Vladimir"
CC: dev@dpdk.org, mdr@ashroe.eu, "Richardson, Bruce", "Zhang, Roy Fan", hemant.agrawal@nxp.com, gakhil@marvell.com, anoobj@marvell.com, "Doherty, Declan", "Sinha, Abhijit", "Buckley, Daniel M", marchana@marvell.com, ktejasree@marvell.com, matan@nvidia.com
Date: Tue, 12 Oct 2021 12:42:35 +0000
Subject: Re: [dpdk-dev] [PATCH v8 06/10] ipsec: add transmit segmentation offload support
In-Reply-To: <20211011112945.2876-7-radu.nicolau@intel.com>
References: <20210713133542.3550525-1-radu.nicolau@intel.com> <20211011112945.2876-1-radu.nicolau@intel.com> <20211011112945.2876-7-radu.nicolau@intel.com>
> Add support for transmit segmentation offload to inline crypto processing
> mode. This offload is not supported by other offload modes, as at a
> minimum it requires inline crypto for IPsec to be supported on the
> network interface.
>
> Signed-off-by: Declan Doherty
> Signed-off-by: Radu Nicolau
> Signed-off-by: Abhijit Sinha
> Signed-off-by: Daniel Martin Buckley
> Acked-by: Fan Zhang
> ---
>  doc/guides/prog_guide/ipsec_lib.rst    |   2 +
>  doc/guides/rel_notes/release_21_11.rst |   1 +
>  lib/ipsec/esp_outb.c                   | 119 +++++++++++++++++++++----
>  3 files changed, 103 insertions(+), 19 deletions(-)
>
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> index af51ff8131..fc0af5eadb 100644
> --- a/doc/guides/prog_guide/ipsec_lib.rst
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -315,6 +315,8 @@ Supported features
>
>  * NAT-T / UDP encapsulated ESP.
>
> +* TSO support (only for inline crypto mode)
> +
>  * algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
>    AES_GMAC, HMAC-SHA1, NULL.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 73a566eaca..77535ace36 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -138,6 +138,7 @@ New Features
>
>    * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
>    * Added support for NAT-T / UDP encapsulated ESP
> +  * Added support TSO offload support; only supported for inline crypto mode.
>
>
>  Removed Items
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 0e3314b358..df7d3e8645 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -147,6 +147,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>  	struct rte_esp_tail *espt;
>  	char *ph, *pt;
>  	uint64_t *iv;
> +	uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));

Why not simply:
int tso = (mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) != 0;
?

As this functionality is supported only for inline, it is better to pass
tso as a parameter instead of checking it here.
Then for lookaside and cpu invocations it will always be zero, and for inline
it would be determined by packet flags.

>
>  	/* calculate extra header space required */
>  	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
> @@ -157,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
>  	/* number of bytes to encrypt */
>  	clen = plen + sizeof(*espt);
> -	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* We don't need to pad/align packet when using TSO offload */
> +	if (likely(!tso))

I don't think we really need likely/unlikely here.
Especially if it will be a constant for non-inline cases.

> +		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
>
>  	/* pad length + esp tail */
>  	pdlen = clen - plen;
> -	tlen = pdlen + sa->icv_len + sqh_len;
> +
> +	/* We don't append ICV length when using TSO offload */
> +	if (likely(!tso))
> +		tlen = pdlen + sa->icv_len + sqh_len;
> +	else
> +		tlen = pdlen + sqh_len;
>
>  	/* do append and prepend */
>  	ml = rte_pktmbuf_lastseg(mb);
> @@ -346,6 +356,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>  	char *ph, *pt;
>  	uint64_t *iv;
>  	uint32_t l2len, l3len;
> +	uint8_t tso = !!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));

Same thoughts as for _tun_ counterpart.
>
>  	l2len = mb->l2_len;
>  	l3len = mb->l3_len;
> @@ -358,11 +369,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
>  	/* number of bytes to encrypt */
>  	clen = plen + sizeof(*espt);
> -	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* We don't need to pad/align packet when using TSO offload */
> +	if (likely(!tso))
> +		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>
>  	/* pad length + esp tail */
>  	pdlen = clen - plen;
> -	tlen = pdlen + sa->icv_len + sqh_len;
> +
> +	/* We don't append ICV length when using TSO offload */
> +	if (likely(!tso))
> +		tlen = pdlen + sa->icv_len + sqh_len;
> +	else
> +		tlen = pdlen + sqh_len;
>
>  	/* do append and insert */
>  	ml = rte_pktmbuf_lastseg(mb);
> @@ -660,6 +679,29 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
>  	}
>  }
>
> +/* check if packet will exceed MSS and segmentation is required */
> +static inline int
> +esn_outb_nb_segments(struct rte_mbuf *m) {

DPDK coding style, please.

> +	uint16_t segments = 1;
> +	uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> +
> +	/* Only support segmentation for UDP/TCP flows */
> +	if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))

For ptypes it is not a bit flag, it should be something like:

pt = m->packet_type & RTE_PTYPE_L4_MASK;
if (pt == RTE_PTYPE_L4_UDP || pt == RTE_PTYPE_L4_TCP) {...}

BTW, ptype is usually used for the RX path.
If you expect the user to set it up on the TX path - it has to be documented
in formal API comments.

> +		return segments;
> +
> +	if (m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) {
> +		segments = pkt_l3len / m->tso_segsz;
> +		if (segments * m->tso_segsz < pkt_l3len)
> +			segments++;

Why not simply:
segments = (pkt_l3len <= m->tso_segsz) ? 1 : (pkt_l3len + m->tso_segsz - 1) / m->tso_segsz;
?
> +		if (m->packet_type & RTE_PTYPE_L4_TCP)
> +			m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
> +		else
> +			m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
> +	}
> +
> +	return segments;
> +}
> +
>  /*
>   * process group of ESP outbound tunnel packets destined for
>   * INLINE_CRYPTO type of device.
> @@ -669,24 +711,36 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>  		struct rte_mbuf *mb[], uint16_t num)
>  {
>  	int32_t rc;
> -	uint32_t i, k, n;
> +	uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
>  	uint64_t sqn;
>  	rte_be64_t sqc;
>  	struct rte_ipsec_sa *sa;
>  	union sym_op_data icv;
>  	uint64_t iv[IPSEC_MAX_IV_QWORD];
>  	uint32_t dr[num];
> +	uint16_t nb_segs[num];
>
>  	sa = ss->sa;
>
> -	n = num;
> -	sqn = esn_outb_update_sqn(sa, &n);
> -	if (n != num)
> +	for (i = 0; i != num; i++) {
> +		nb_segs[i] = esn_outb_nb_segments(mb[i]);
> +		nb_sqn += nb_segs[i];
> +		/* setup offload fields for TSO */
> +		if (nb_segs[i] > 1) {
> +			mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
> +					PKT_TX_OUTER_IP_CKSUM |

Hmm..., why did you decide it would always be an ipv4 packet?
Why does it definitely need outer ip cksum?

> +					PKT_TX_TUNNEL_ESP);

Another question: why do you set up some flags in esn_outb_nb_segments(), and others here?

> +			mb[i]->outer_l3_len = mb[i]->l3_len;

Not sure I understand that part:
l3_len will be provided to us by the user and will be the inner l3_len,
while we do add our own l3 hdr, which will become the outer l3, no?

> +		}
> +	}
> +
> +	nb_sqn_alloc = nb_sqn;
> +	sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> +	if (nb_sqn_alloc != nb_sqn)
>  		rte_errno = EOVERFLOW;
>
>  	k = 0;
> -	for (i = 0; i != n; i++) {
> -
> +	for (i = 0; i != num; i++) {

You can't expect that nb_sqn_alloc == nb_sqn.
You need to handle EOVERFLOW here properly.

>  		sqc = rte_cpu_to_be_64(sqn + i);
>  		gen_iv(iv, sqc);
>
> @@ -700,11 +754,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>  			dr[i - k] = i;
>  			rte_errno = -rc;
>  		}
> +
> +		/**
> +		 * If packet is using tso, increment sqn by the number of
> +		 * segments for packet
> +		 */
> +		if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> +			sqn += nb_segs[i] - 1;

I think instead of that you can just:
sqn += nb_segs[i];
and then above:
-		sqc = rte_cpu_to_be_64(sqn + i);
+		sqc = rte_cpu_to_be_64(sqn);

>  	}
>
>  	/* copy not processed mbufs beyond good ones */
> -	if (k != n && k != 0)
> -		move_bad_mbufs(mb, dr, n, n - k);
> +	if (k != num && k != 0)
> +		move_bad_mbufs(mb, dr, num, num - k);

Same as above - you can't just assume there would be no failures
with SQN allocation.

>  	inline_outb_mbuf_prepare(ss, mb, k);
>  	return k;

Similar thoughts for _trs_ counterpart.
Honestly, considering the amount of changes introduced, I would like to see a
new test-case for it.
Otherwise it is really hard to be sure that it does work as expected.
Can you add a new test-case for it to examples/ipsec-secgw/test?
> @@ -719,23 +780,36 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>  		struct rte_mbuf *mb[], uint16_t num)
>  {
>  	int32_t rc;
> -	uint32_t i, k, n;
> +	uint32_t i, k, nb_sqn, nb_sqn_alloc;
>  	uint64_t sqn;
>  	rte_be64_t sqc;
>  	struct rte_ipsec_sa *sa;
>  	union sym_op_data icv;
>  	uint64_t iv[IPSEC_MAX_IV_QWORD];
>  	uint32_t dr[num];
> +	uint16_t nb_segs[num];
>
>  	sa = ss->sa;
>
> -	n = num;
> -	sqn = esn_outb_update_sqn(sa, &n);
> -	if (n != num)
> +	/* Calculate number of sequence numbers required */
> +	for (i = 0, nb_sqn = 0; i != num; i++) {
> +		nb_segs[i] = esn_outb_nb_segments(mb[i]);
> +		nb_sqn += nb_segs[i];
> +		/* setup offload fields for TSO */
> +		if (nb_segs[i] > 1) {
> +			mb[i]->ol_flags |= (PKT_TX_OUTER_IPV4 |
> +					PKT_TX_OUTER_IP_CKSUM);
> +			mb[i]->outer_l3_len = mb[i]->l3_len;
> +		}
> +	}
> +
> +	nb_sqn_alloc = nb_sqn;
> +	sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
> +	if (nb_sqn_alloc != nb_sqn)
>  		rte_errno = EOVERFLOW;
>
>  	k = 0;
> -	for (i = 0; i != n; i++) {
> +	for (i = 0; i != num; i++) {
>
>  		sqc = rte_cpu_to_be_64(sqn + i);
>  		gen_iv(iv, sqc);
> @@ -750,11 +824,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>  			dr[i - k] = i;
>  			rte_errno = -rc;
>  		}
> +
> +		/**
> +		 * If packet is using tso, increment sqn by the number of
> +		 * segments for packet
> +		 */
> +		if (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
> +			sqn += nb_segs[i] - 1;
>  	}
>
>  	/* copy not processed mbufs beyond good ones */
> -	if (k != n && k != 0)
> -		move_bad_mbufs(mb, dr, n, n - k);
> +	if (k != num && k != 0)
> +		move_bad_mbufs(mb, dr, num, num - k);
>
>  	inline_outb_mbuf_prepare(ss, mb, k);
>  	return k;
> --
> 2.25.1