To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, "Iremonger, Bernard"
 <bernard.iremonger@intel.com>, "Medvedkin, Vladimir"
 <vladimir.medvedkin@intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "mdr@ashroe.eu" <mdr@ashroe.eu>,
 "Richardson, Bruce" <bruce.richardson@intel.com>, "Zhang, Roy Fan"
 <roy.fan.zhang@intel.com>, "hemant.agrawal@nxp.com" <hemant.agrawal@nxp.com>, 
 "gakhil@marvell.com" <gakhil@marvell.com>, "anoobj@marvell.com"
 <anoobj@marvell.com>, "Doherty, Declan" <declan.doherty@intel.com>, "Sinha,
 Abhijit" <abhijit.sinha@intel.com>, "Buckley, Daniel M"
 <daniel.m.buckley@intel.com>, "marchana@marvell.com" <marchana@marvell.com>,
 "ktejasree@marvell.com" <ktejasree@marvell.com>, "matan@nvidia.com"
 <matan@nvidia.com>
References: <20210713133542.3550525-1-radu.nicolau@intel.com>
 <20210917091747.1528262-1-radu.nicolau@intel.com>
 <20210917091747.1528262-7-radu.nicolau@intel.com>
 <DM6PR11MB4491DB80BC7221B375722F7E9AA39@DM6PR11MB4491.namprd11.prod.outlook.com>
From: "Nicolau, Radu" <radu.nicolau@intel.com>
Message-ID: <7895670d-4b2f-c083-5203-3831f7157972@intel.com>
Date: Tue, 28 Sep 2021 16:14:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.14.0
In-Reply-To: <DM6PR11MB4491DB80BC7221B375722F7E9AA39@DM6PR11MB4491.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
Subject: Re: [dpdk-dev] [PATCH v6 06/10] ipsec: add transmit segmentation
 offload support


On 9/23/2021 3:09 PM, Ananyev, Konstantin wrote:
>
>> Add support for transmit segmentation offload to inline crypto processing
>> mode. This offload is not supported by other offload modes, as at a
>> minimum it requires inline crypto for IPsec to be supported on the
>> network interface.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
>> ---
>>   lib/ipsec/esp_inb.c  |   4 +-
>>   lib/ipsec/esp_outb.c | 115 +++++++++++++++++++++++++++++++++++--------
>>   lib/ipsec/iph.h      |  10 +++-
>>   lib/ipsec/sa.c       |   6 +++
>>   lib/ipsec/sa.h       |   4 ++
>>   5 files changed, 114 insertions(+), 25 deletions(-)
>>
>> diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
>> index d66c88f05d..a6ab8fbdd5 100644
>> --- a/lib/ipsec/esp_inb.c
>> +++ b/lib/ipsec/esp_inb.c
>> @@ -668,8 +668,8 @@ trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>>   			/* modify packet's layout */
>>   			np = trs_process_step2(mb[i], ml[i], hl[i], cofs,
>>   				to[i], tl, sqn + k);
>> -			update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len,
>> -				l2, hl[i] - l2, espt[i].next_proto);
>> +			update_trs_l34hdrs(sa, np + l2, mb[i]->pkt_len,
>> +				l2, hl[i] - l2, espt[i].next_proto, 0);
>>
>>   			/* update mbuf's metadata */
>>   			trs_process_step3(mb[i]);
>> diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
>> index a3f77469c3..9fc7075796 100644
>> --- a/lib/ipsec/esp_outb.c
>> +++ b/lib/ipsec/esp_outb.c
>> @@ -2,6 +2,8 @@
>>    * Copyright(c) 2018-2020 Intel Corporation
>>    */
>>
>> +#include <math.h>
>> +
>>   #include <rte_ipsec.h>
>>   #include <rte_esp.h>
>>   #include <rte_ip.h>
>> @@ -156,11 +158,20 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>>
>>   	/* number of bytes to encrypt */
>>   	clen = plen + sizeof(*espt);
>> -	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>> +
>> +	/* We don't need to pad/align the packet when using TSO offload */
>> +	if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
>> +		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>> +
> Here and everywhere:
> It doesn't look nice that we have to pollute generic functions with
> TSO-specific flag checks all over the place.
> Can we perhaps have a specific prepare/process function for the inline+tso case,
> as we already have for the cpu and inline cases?
> Or just update the inline version?
I looked at doing this, but I can't move these checks out without 
duplicating the two prepare functions.
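One middle-ground option, just a sketch against the current code and not 
tested, is to fold the two tlen branches into a small helper so the 
generic prepare functions stay readable:

	/* tail length to append: the ICV is not accounted for when TSO
	 * is requested, since it is added after segmentation */
	static inline uint32_t
	outb_pkt_tail_len(const struct rte_ipsec_sa *sa, uint32_t clen,
		uint32_t plen, uint32_t sqh_len, int tso)
	{
		uint32_t pdlen = clen - plen;

		return tso ? pdlen + sqh_len : pdlen + sa->icv_len + sqh_len;
	}

The outb_pkt_tail_len() name is made up here, and the pad_align decision 
would still need the tso flag in both outb_tun_pkt_prepare() and 
outb_trs_pkt_prepare().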
>
>>   	/* pad length + esp tail */
>>   	pdlen = clen - plen;
>> -	tlen = pdlen + sa->icv_len + sqh_len;
>> +
>> +	/* We don't append ICV length when using TSO offload */
>> +	if (likely(!(mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))))
>> +		tlen = pdlen + sa->icv_len + sqh_len;
>> +	else
>> +		tlen = pdlen + sqh_len;
>>
>>   	/* do append and prepend */
>>   	ml = rte_pktmbuf_lastseg(mb);
>> @@ -337,6 +348,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>>   	char *ph, *pt;
>>   	uint64_t *iv;
>>   	uint32_t l2len, l3len;
>> +	uint8_t tso = mb->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG) ? 1 : 0;
>>
>>   	l2len = mb->l2_len;
>>   	l3len = mb->l3_len;
>> @@ -349,11 +361,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>>
>>   	/* number of bytes to encrypt */
>>   	clen = plen + sizeof(*espt);
>> -	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>> +
>> +	/* We don't need to pad/align the packet when using TSO offload */
>> +	if (likely(!tso))
>> +		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
>>
>>   	/* pad length + esp tail */
>>   	pdlen = clen - plen;
>> -	tlen = pdlen + sa->icv_len + sqh_len;
>> +
>> +	/* We don't append ICV length when using TSO offload */
>> +	if (likely(!tso))
>> +		tlen = pdlen + sa->icv_len + sqh_len;
>> +	else
>> +		tlen = pdlen + sqh_len;
>>
>>   	/* do append and insert */
>>   	ml = rte_pktmbuf_lastseg(mb);
>> @@ -375,8 +395,8 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
>>   	insert_esph(ph, ph + hlen, uhlen);
>>
>>   	/* update ip  header fields */
>> -	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
>> -			l3len, IPPROTO_ESP);
>> +	np = update_trs_l34hdrs(sa, ph + l2len, mb->pkt_len - sqh_len, l2len,
>> +			l3len, IPPROTO_ESP, tso);
>>
>>   	/* update spi, seqn and iv */
>>   	esph = (struct rte_esp_hdr *)(ph + uhlen);
>> @@ -651,6 +671,33 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
>>   	}
>>   }
>>
>> +/* check if packet will exceed MSS and segmentation is required */
>> +static inline int
>> +esn_outb_nb_segments(const struct rte_ipsec_sa *sa, struct rte_mbuf *m) {
>> +	uint16_t segments = 1;
>> +	uint16_t pkt_l3len = m->pkt_len - m->l2_len;
>> +
>> +	/* Only support segmentation for UDP/TCP flows */
>> +	if (!(m->packet_type & (RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP)))
>> +		return segments;
>> +
>> +	if (sa->tso.enabled && pkt_l3len > sa->tso.mss) {
>> +		segments = ceil((float)pkt_l3len / sa->tso.mss);
> Float calculations in the middle of the data-path?
> Just to calculate a round-up?
> Doesn't look good to me at all.
It doesn't look good to me either - I will rework it.
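Something like this integer round-up should do instead, and it also drops
the <math.h> dependency:

	/* integer ceiling division, no float math in the data-path */
	segments = (pkt_l3len + sa->tso.mss - 1) / sa->tso.mss;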
>
>> +
>> +		if  (m->packet_type & RTE_PTYPE_L4_TCP) {
>> +			m->ol_flags |= (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM);
> That's really strange - why would the ipsec library set PKT_TX_TCP_SEG unconditionally?
> That should be the responsibility of the upper layer, I think.
> In the lib we should only check whether TSO was requested for that packet or not.
> Same for UDP.
These are only set under the TSO condition above
(sa->tso.enabled && pkt_l3len > sa->tso.mss), not unconditionally.
>
>> +			m->l4_len = sizeof(struct rte_tcp_hdr);
> Hmm, how do we know there are no TCP options present for that packet?
> Wouldn't it be better to expect the user to provide the proper l4_len for such packets?
You're right, I will update it.
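If we don't want to rely on the caller filling l4_len, it can be derived
from the TCP data offset field, something like this (a sketch, assuming
the L4 header sits in the first mbuf segment):

	const struct rte_tcp_hdr *th;

	/* the upper 4 bits of data_off hold the TCP header length
	 * in 32-bit words, so options are covered as well */
	th = rte_pktmbuf_mtod_offset(m, const struct rte_tcp_hdr *,
		m->l2_len + m->l3_len);
	m->l4_len = (th->data_off >> 4) * 4;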

>
>> +		} else {
>> +			m->ol_flags |= (PKT_TX_UDP_SEG | PKT_TX_UDP_CKSUM);
>> +			m->l4_len = sizeof(struct rte_udp_hdr);
>> +		}
>> +
>> +		m->tso_segsz = sa->tso.mss;
>> +	}
>> +
>> +	return segments;
>> +}
>> +
>>   /*
>>    * process group of ESP outbound tunnel packets destined for
>>    * INLINE_CRYPTO type of device.
>> @@ -660,24 +707,29 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>>   	struct rte_mbuf *mb[], uint16_t num)
>>   {
>>   	int32_t rc;
>> -	uint32_t i, k, n;
>> +	uint32_t i, k, nb_sqn = 0, nb_sqn_alloc;
>>   	uint64_t sqn;
>>   	rte_be64_t sqc;
>>   	struct rte_ipsec_sa *sa;
>>   	union sym_op_data icv;
>>   	uint64_t iv[IPSEC_MAX_IV_QWORD];
>>   	uint32_t dr[num];
>> +	uint16_t nb_segs[num];
>>
>>   	sa = ss->sa;
>>
>> -	n = num;
>> -	sqn = esn_outb_update_sqn(sa, &n);
>> -	if (n != num)
>> +	for (i = 0; i != num; i++) {
>> +		nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
>> +		nb_sqn += nb_segs[i];
>> +	}
>> +
>> +	nb_sqn_alloc = nb_sqn;
>> +	sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
>> +	if (nb_sqn_alloc != nb_sqn)
>>   		rte_errno = EOVERFLOW;
>>
>>   	k = 0;
>> -	for (i = 0; i != n; i++) {
>> -
>> +	for (i = 0; i != num; i++) {
>>   		sqc = rte_cpu_to_be_64(sqn + i);
>>   		gen_iv(iv, sqc);
>>
>> @@ -691,11 +743,18 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
>>   			dr[i - k] = i;
>>   			rte_errno = -rc;
>>   		}
>> +
>> +		/**
>> +		 * If the packet is using TSO, increment sqn by the number of
>> +		 * segments for the packet
>> +		 */
>> +		if  (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
>> +			sqn += nb_segs[i] - 1;
>>   	}
>>
>>   	/* copy not processed mbufs beyond good ones */
>> -	if (k != n && k != 0)
>> -		move_bad_mbufs(mb, dr, n, n - k);
>> +	if (k != num && k != 0)
>> +		move_bad_mbufs(mb, dr, num, num - k);
>>
>>   	inline_outb_mbuf_prepare(ss, mb, k);
>>   	return k;
>> @@ -710,23 +769,30 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>>   	struct rte_mbuf *mb[], uint16_t num)
>>   {
>>   	int32_t rc;
>> -	uint32_t i, k, n;
>> +	uint32_t i, k, nb_sqn, nb_sqn_alloc;
>>   	uint64_t sqn;
>>   	rte_be64_t sqc;
>>   	struct rte_ipsec_sa *sa;
>>   	union sym_op_data icv;
>>   	uint64_t iv[IPSEC_MAX_IV_QWORD];
>>   	uint32_t dr[num];
>> +	uint16_t nb_segs[num];
>>
>>   	sa = ss->sa;
>>
>> -	n = num;
>> -	sqn = esn_outb_update_sqn(sa, &n);
>> -	if (n != num)
>> +	/* Calculate number of sequence numbers required */
>> +	for (i = 0, nb_sqn = 0; i != num; i++) {
>> +		nb_segs[i] = esn_outb_nb_segments(sa, mb[i]);
>> +		nb_sqn += nb_segs[i];
>> +	}
>> +
>> +	nb_sqn_alloc = nb_sqn;
>> +	sqn = esn_outb_update_sqn(sa, &nb_sqn_alloc);
>> +	if (nb_sqn_alloc != nb_sqn)
>>   		rte_errno = EOVERFLOW;
>>
>>   	k = 0;
>> -	for (i = 0; i != n; i++) {
>> +	for (i = 0; i != num; i++) {
>>
>>   		sqc = rte_cpu_to_be_64(sqn + i);
>>   		gen_iv(iv, sqc);
>> @@ -741,11 +807,18 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
>>   			dr[i - k] = i;
>>   			rte_errno = -rc;
>>   		}
>> +
>> +		/**
>> +		 * If the packet is using TSO, increment sqn by the number of
>> +		 * segments for the packet
>> +		 */
>> +		if  (mb[i]->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))
>> +			sqn += nb_segs[i] - 1;
>>   	}
>>
>>   	/* copy not processed mbufs beyond good ones */
>> -	if (k != n && k != 0)
>> -		move_bad_mbufs(mb, dr, n, n - k);
>> +	if (k != num && k != 0)
>> +		move_bad_mbufs(mb, dr, num, num - k);
>>
>>   	inline_outb_mbuf_prepare(ss, mb, k);
>>   	return k;
>> diff --git a/lib/ipsec/iph.h b/lib/ipsec/iph.h
>> index 861f16905a..2d223199ac 100644
>> --- a/lib/ipsec/iph.h
>> +++ b/lib/ipsec/iph.h
>> @@ -6,6 +6,8 @@
>>   #define _IPH_H_
>>
>>   #include <rte_ip.h>
>> +#include <rte_udp.h>
>> +#include <rte_tcp.h>
>>
>>   /**
>>    * @file iph.h
>> @@ -39,8 +41,8 @@ insert_esph(char *np, char *op, uint32_t hlen)
>>
>>   /* update original ip header fields for transport case */
>>   static inline int
>> -update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>> -		uint32_t l2len, uint32_t l3len, uint8_t proto)
>> +update_trs_l34hdrs(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>> +		uint32_t l2len, uint32_t l3len, uint8_t proto, uint8_t tso)
> Hmm... why change the name of the function?
>
>>   {
>>   	int32_t rc;
>>
>> @@ -51,6 +53,10 @@ update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>>   		v4h = p;
>>   		rc = v4h->next_proto_id;
>>   		v4h->next_proto_id = proto;
>> +		if (tso) {
>> +			v4h->hdr_checksum = 0;
>> +			v4h->total_length = 0;
> total_length will be overwritten unconditionally on the next line below.
>
> Another question - why is it necessary?
> Is it a HW-specific requirement or ... ?
It looks wrong, I will rewrite this.
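Since total_length is overwritten unconditionally just below, for v7
something like this should be enough, assuming only the checksum reset
is actually needed for TSO:

	if (tso)
		v4h->hdr_checksum = 0;
	v4h->total_length = rte_cpu_to_be_16(plen - l2len);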
>
>
>> +		}
>>   		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
>
>>   	/* IPv6 */
>>   	} else {
>> diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
>> index 720e0f365b..2ecbbce0a4 100644
>> --- a/lib/ipsec/sa.c
>> +++ b/lib/ipsec/sa.c
>> @@ -565,6 +565,12 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
>>   	sa->type = type;
>>   	sa->size = sz;
>>
>> +
>> +	if (prm->ipsec_xform.options.tso == 1) {
>> +		sa->tso.enabled = 1;
>> +		sa->tso.mss = prm->ipsec_xform.mss;
>> +	}
>> +
>>   	/* check for ESN flag */
>>   	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
>>   		UINT32_MAX : UINT64_MAX;
>> diff --git a/lib/ipsec/sa.h b/lib/ipsec/sa.h
>> index 107ebd1519..5e237f3525 100644
>> --- a/lib/ipsec/sa.h
>> +++ b/lib/ipsec/sa.h
>> @@ -113,6 +113,10 @@ struct rte_ipsec_sa {
>>   	uint8_t iv_len;
>>   	uint8_t pad_align;
>>   	uint8_t tos_mask;
>> +	struct {
>> +		uint8_t enabled:1;
>> +		uint16_t mss;
>> +	} tso;
> Wouldn't one field be enough?
> uint16_t tso_mss;
> And if it is zero, then tso is disabled.
> In fact, do we need it at all?
> Wouldn't it be better to request the user to fill mbuf->tso_segsz properly for us?

We added an option to rte_security_ipsec_sa_options to allow the user to 
enable TSO per SA and specify the MSS in the session parameters.

We could request the user to fill mbuf->tso_segsz, but with this patch we 
are doing it for the user.
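Without the per-SA option, every application would have to do the
equivalent per packet before calling the process function, something
along these lines (mss being whatever value the application negotiated):

	m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM;
	m->tso_segsz = mss;
	m->l4_len = sizeof(struct rte_tcp_hdr);

so the SA option just centralises that in the library.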

>
>>   	/* template for tunnel header */
>>   	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
>> --
>> 2.25.1