From: "Ananyev, Konstantin"
To: "Nicolau, Radu", "Iremonger, Bernard", "Medvedkin, Vladimir"
Cc: "dev@dpdk.org", "mdr@ashroe.eu", "Richardson, Bruce", "Zhang, Roy Fan", "hemant.agrawal@nxp.com", "gakhil@marvell.com", "anoobj@marvell.com", "Doherty, Declan", "Sinha, Abhijit", "Buckley, Daniel M", "marchana@marvell.com", "ktejasree@marvell.com", "matan@nvidia.com"
Date: Thu, 14 Oct 2021 14:42:08 +0000
References: <20210713133542.3550525-1-radu.nicolau@intel.com> <20211013121331.300245-1-radu.nicolau@intel.com> <20211013121331.300245-7-radu.nicolau@intel.com>
In-Reply-To: <20211013121331.300245-7-radu.nicolau@intel.com>
Subject: Re: [dpdk-dev] [PATCH v9 06/10] ipsec: add transmit segmentation offload support

> Add support for transmit segmentation offload to inline crypto processing
> mode. This offload is not supported by other offload modes, as at a
> minimum it requires inline crypto for IPsec to be supported on the
> network interface.

Thanks for the rework.
It looks much better to me now, but still a few more comments.
Konstantin

> Signed-off-by: Declan Doherty
> Signed-off-by: Radu Nicolau
> Signed-off-by: Abhijit Sinha
> Signed-off-by: Daniel Martin Buckley
> Acked-by: Fan Zhang
> ---
>  doc/guides/prog_guide/ipsec_lib.rst    |   2 +
>  doc/guides/rel_notes/release_21_11.rst |   1 +
>  lib/ipsec/esp_outb.c                   | 120 +++++++++++++++++++------
>  3 files changed, 97 insertions(+), 26 deletions(-)
>
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> index af51ff8131..fc0af5eadb 100644
> --- a/doc/guides/prog_guide/ipsec_lib.rst
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -315,6 +315,8 @@ Supported features
>
>  * NAT-T / UDP encapsulated ESP.
>
> +* TSO support (only for inline crypto mode)
> +
>  * algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
>    AES_GMAC, HMAC-SHA1, NULL.
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index e9fb169d44..0a9c71d92e 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -158,6 +158,7 @@ New Features
>
>    * Added support for AEAD algorithms AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
>    * Added support for NAT-T / UDP encapsulated ESP
> +  * Added support TSO offload support; only supported for inline crypto mode.
>
>
>  Removed Items
> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> index 0e3314b358..d327c32a38 100644
> --- a/lib/ipsec/esp_outb.c
> +++ b/lib/ipsec/esp_outb.c
> @@ -18,7 +18,7 @@
>
>  typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>  	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> -	union sym_op_data *icv, uint8_t sqh_len);
> +	union sym_op_data *icv, uint8_t sqh_len, uint8_t tso);
>
>  /*
>   * helper function to fill crypto_sym op for cipher+auth algorithms.
> @@ -139,7 +139,7 @@ outb_cop_prepare(struct rte_crypto_op *cop,
>  static inline int32_t
>  outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>  	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> -	union sym_op_data *icv, uint8_t sqh_len)
> +	union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
>  {
>  	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
>  	struct rte_mbuf *ml;
> @@ -157,11 +157,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
>  	/* number of bytes to encrypt */
>  	clen = plen + sizeof(*espt);
> -	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* We don't need to pad/align packet when using TSO offload */
> +	if (!tso)
> +		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
>
>  	/* pad length + esp tail */
>  	pdlen = clen - plen;
> -	tlen = pdlen + sa->icv_len + sqh_len;
> +
> +	/* We don't append ICV length when using TSO offload */
> +	if (!tso)
> +		tlen = pdlen + sa->icv_len + sqh_len;
> +	else
> +		tlen = pdlen + sqh_len;
>
>  	/* do append and prepend */
>  	ml = rte_pktmbuf_lastseg(mb);
> @@ -309,7 +318,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
>  		/* try to update the packet itself */
>  		rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
> -					  sa->sqh_len);
> +					  sa->sqh_len, 0);
>  		/* success, setup crypto op */
>  		if (rc >= 0) {
>  			outb_pkt_xprepare(sa, sqc, &icv);
> @@ -336,7 +345,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>  static inline int32_t
>  outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>  	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> -	union sym_op_data *icv, uint8_t sqh_len)
> +	union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
>  {
>  	uint8_t np;
>  	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
> @@ -358,11 +367,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>
>  	/* number of bytes to encrypt */
>  	clen = plen + sizeof(*espt);
> -	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* We don't need to pad/align packet when using TSO offload */
> +	if (!tso)
> +		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>
>  	/* pad length + esp tail */
>  	pdlen = clen - plen;
> -	tlen = pdlen + sa->icv_len + sqh_len;
> +
> +	/* We don't append ICV length when using TSO offload */
> +	if (!tso)
> +		tlen = pdlen + sa->icv_len + sqh_len;
> +	else
> +		tlen = pdlen + sqh_len;
>
>  	/* do append and insert */
>  	ml = rte_pktmbuf_lastseg(mb);
> @@ -452,7 +469,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>
>  		/* try to update the packet itself */
>  		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
> -					  sa->sqh_len);
> +					  sa->sqh_len, 0);
>  		/* success, setup crypto op */
>  		if (rc >= 0) {
>  			outb_pkt_xprepare(sa, sqc, &icv);
> @@ -549,7 +566,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
>  		gen_iv(ivbuf[k], sqc);
>
>  		/* try to update the packet itself */
> -		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
> +		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
>
>  		/* success, proceed with preparations */
>  		if (rc >= 0) {
> @@ -660,6 +677,20 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
>  	}
>  }
>
> +
> +static inline int
> +esn_outb_nb_segments(struct rte_mbuf *m)
> +{
> +	if (m->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) {
> +		uint16_t pkt_l3len = m->pkt_len - m->l2_len;
> +		uint16_t segments =
> +			(m->tso_segsz > 0 &&
> +				pkt_l3len > m->tso_segsz) ?
> +			(pkt_l3len + m->tso_segsz - 1) / m->tso_segsz : 1;
> +		return segments;
> +	}
> +	return 1;	/* no TSO */
> +}
> +
>  /*
>   * process group of ESP outbound tunnel packets destined for
>   * INLINE_CRYPTO type of device.
> @@ -669,29 +700,47 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>  	struct rte_mbuf *mb[], uint16_t num)
>  {
>  	int32_t rc;
> -	uint32_t i, k, n;
> +	uint32_t i, k, n, nb_sqn;
>  	uint64_t sqn;
>  	rte_be64_t sqc;
>  	struct rte_ipsec_sa *sa;
>  	union sym_op_data icv;
>  	uint64_t iv[IPSEC_MAX_IV_QWORD];
>  	uint32_t dr[num];
> +	uint16_t nb_segs[num];
>
>  	sa = ss->sa;
> +	nb_sqn = 0;
> +	for (i = 0; i != num; i++) {
> +		nb_segs[i] = esn_outb_nb_segments(mb[i]);
> +		nb_sqn += nb_segs[i];
> +		/* setup outer l2 and l3 len for TSO */
> +		if (nb_segs[i] > 1) {
> +			if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> +				mb[i]->outer_l3_len =
> +					sizeof(struct rte_ipv4_hdr);
> +			else
> +				mb[i]->outer_l3_len =
> +					sizeof(struct rte_ipv6_hdr);
> +			mb[i]->outer_l2_len = mb[i]->l2_len;

I still don't understand your logic behind setting these fields here.
How it looks to me: it is tunnel mode, so the ipsec lib appends its tunnel header.
In the normal (non-TSO) case it sets up l2_len and l3_len, which are stored
inside sa->tx_offload (for non-TSO we don't care about the inner/outer
distinction and don't have to set up the outer fields or set the TX_PKT_OUTER flags).
Now for TSO we do need to do that, right?
So as I understand it:
sa->tx_offload.l2_len will become mb->outer_l2_len
sa->tx_offload.l3_len will become mb->outer_l3_len
mb->l2_len should be set to zero
mb->l3_len, mb->l4_len, mb->tso_segsz should remain the same
(the ipsec lib shouldn't modify them).
Please correct me if I missed something here.
Also note that right now we set up the mbuf tx_offload much further down -
at outb_tun_pkt_prepare() - so these changes probably have to be adjusted to
happen after that function call.

> +		}
> +	}
>
> -	n = num;
> +	n = nb_sqn;
>  	sqn = esn_outb_update_sqn(sa, &n);
> -	if (n != num)
> +	if (n != nb_sqn)
>  		rte_errno = EOVERFLOW;
>
>  	k = 0;
> -	for (i = 0; i != n; i++) {
> +	for (i = 0; i != num; i++) {

As I stated in the previous mail, you can't just assume that n == num always.
That way you just ignore the SQN overflow error you get above.
The proper way would be to find how many full packets have a valid SQN value
and set 'n' to that.
I know it is extra pain for TSO mode, but I don't see any better way here.

>
> -		sqc = rte_cpu_to_be_64(sqn + i);
> +		sqc = rte_cpu_to_be_64(sqn);
>  		gen_iv(iv, sqc);
> +		sqn += nb_segs[i];
>
>  		/* try to update the packet itself */
> -		rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
> +		rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
> +			nb_segs[i] > 1);

I don't think we have to make the decision based on the number of segments.
Even if the whole packet fits into one TCP segment, TX_TCP_SEG is still set
for it, so the HW/PMD expects the data in a different format.
It should probably be based on the flags value, something like:
mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG).
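To be concrete, a rough and untested sketch of what I have in mind - a small
helper driven purely by the mbuf TX flags, used instead of the nb_segs[i] > 1
check (the helper name is just for illustration, not an existing function):

static inline int
esn_outb_pkt_tso(const struct rte_mbuf *m)
{
	/* treat the packet as TSO whenever the application requested
	 * segmentation, regardless of how many segments it will produce */
	return (m->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) != 0;
}

	...
	rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
		esn_outb_pkt_tso(mb[i]));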
>
>  		k += (rc >= 0);
>
> @@ -703,8 +752,8 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>  	}
>
>  	/* copy not processed mbufs beyond good ones */
> -	if (k != n && k != 0)
> -		move_bad_mbufs(mb, dr, n, n - k);
> +	if (k != num && k != 0)
> +		move_bad_mbufs(mb, dr, num, num - k);
>
>  	inline_outb_mbuf_prepare(ss, mb, k);
>  	return k;
> @@ -719,29 +768,48 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>  	struct rte_mbuf *mb[], uint16_t num)
>  {
>  	int32_t rc;
> -	uint32_t i, k, n;
> +	uint32_t i, k, n, nb_sqn;
>  	uint64_t sqn;
>  	rte_be64_t sqc;
>  	struct rte_ipsec_sa *sa;
>  	union sym_op_data icv;
>  	uint64_t iv[IPSEC_MAX_IV_QWORD];
>  	uint32_t dr[num];
> +	uint16_t nb_segs[num];
>
>  	sa = ss->sa;
> +	nb_sqn = 0;
> +	/* Calculate number of sequence numbers required */
> +	for (i = 0; i != num; i++) {
> +		nb_segs[i] = esn_outb_nb_segments(mb[i]);
> +		nb_sqn += nb_segs[i];
> +		/* setup outer l2 and l3 len for TSO */
> +		if (nb_segs[i] > 1) {
> +			if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4)
> +				mb[i]->outer_l3_len =
> +					sizeof(struct rte_ipv4_hdr);
> +			else
> +				mb[i]->outer_l3_len =
> +					sizeof(struct rte_ipv6_hdr);

Again, that just doesn't look right to me.

> +			mb[i]->outer_l2_len = mb[i]->l2_len;

For transport mode I am actually not sure how the mb tx_offload fields have
to be set up...
Do we still need to set up the outer fields, considering that we are not
adding a new IP header here?

> +		}
> +	}
>
> -	n = num;
> +	n = nb_sqn;
>  	sqn = esn_outb_update_sqn(sa, &n);
> -	if (n != num)
> +	if (n != nb_sqn)
>  		rte_errno = EOVERFLOW;
>
>  	k = 0;
> -	for (i = 0; i != n; i++) {
> +	for (i = 0; i != num; i++) {

Same story as for tunnel mode, we can't just ignore the error here.

>
> -		sqc = rte_cpu_to_be_64(sqn + i);
> +		sqc = rte_cpu_to_be_64(sqn);
>  		gen_iv(iv, sqc);
> +		sqn += nb_segs[i];
>
>  		/* try to update the packet itself */
> -		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
> +		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
> +			nb_segs[i] > 1);

Same thoughts as for tunnel mode.

>
>  		k += (rc >= 0);
>
> @@ -753,8 +821,8 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>  	}
>
>  	/* copy not processed mbufs beyond good ones */
> -	if (k != n && k != 0)
> -		move_bad_mbufs(mb, dr, n, n - k);
> +	if (k != num && k != 0)
> +		move_bad_mbufs(mb, dr, num, num - k);
>
>  	inline_outb_mbuf_prepare(ss, mb, k);
>  	return k;
> --
> 2.25.1
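One more thing, to make my SQN overflow comment above a bit more concrete:
a rough, untested sketch of what I mean by processing only the whole packets
that still have valid SQN values (the helper name and exact shape are just
for illustration, nothing like it exists in the lib today):

/* given the per-packet segment counts and the number of SQNs actually
 * granted by esn_outb_update_sqn() (n), return how many whole packets
 * can be processed without running past the granted range */
static inline uint32_t
esn_outb_nb_valid_pkts(uint32_t num, uint32_t n, const uint16_t nb_segs[])
{
	uint32_t i, used;

	used = 0;
	for (i = 0; i != num && used + nb_segs[i] <= n; i++)
		used += nb_segs[i];
	return i;
}

Then the processing loop iterates only over that many packets, and the
remaining ones are handled through move_bad_mbufs(), same as in the
non-TSO case.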