From: Ferruh Yigit
Date: Mon, 11 Oct 2021 22:59:34 +0100
To: Matan Azrad, Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor), Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Slava Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella, Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K, Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson, Igor Russkikh, Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu, Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi, Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko, Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh, NBU-Contact-Thomas Monjalon, Dekel Peled
CC: dev@dpdk.org
References: <20211001143624.3744505-1-ferruh.yigit@intel.com> <20211005171653.3700067-1-ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
List-Id: DPDK patches and discussions <dev@dpdk.org>

On 10/10/2021 7:30 AM, Matan Azrad wrote:
>
> Hi Ferruh
>
> From: Ferruh Yigit
>> There is some confusion around setting the max Rx packet length; this
>> patch aims to clarify it.
>>
>> The 'rte_eth_dev_configure()' API accepts the max Rx packet size via
>> the 'uint32_t max_rx_pkt_len' field of the config struct
>> 'struct rte_eth_conf'.
>>
>> Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and
>> the result is stored in '(struct rte_eth_dev)->data->mtu'.
>>
>> These two APIs are related, but they work in a disconnected way: they
>> store the set values in different variables, which makes it hard to
>> figure out which one to use, and having two different methods for
>> related functionality is confusing for users.
>>
>> Other issues causing confusion are:
>> * The maximum transmission unit (MTU) is the payload of the Ethernet
>>   frame, while 'max_rx_pkt_len' is the size of the whole Ethernet
>>   frame. The difference is the Ethernet frame overhead, and this
>>   overhead may differ from device to device based on what the device
>>   supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when the application requests jumbo
>>   frames, which adds further confusion, and some APIs and PMDs already
>>   disregard this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
>>   field, which adds configuration complexity for the application.
>>
>> As a solution, both APIs take the MTU as parameter and both save the
>> result in the same variable, '(struct rte_eth_dev)->data->mtu'. For
>> this, 'max_rx_pkt_len' is renamed to 'mtu', and it is always valid,
>> independent of jumbo frames.
>>
>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
>> user request and should be used only within the configure function;
>> the result should be stored in '(struct rte_eth_dev)->data->mtu'.
>> After that point both the application and the PMD use the MTU from
>> this variable.
>>
>> When the application doesn't provide an MTU during
>> 'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
>>
>> Additional clarification is done on the scattered Rx configuration,
>> in relation to the MTU and the Rx buffer size.
>> The MTU is used to configure the device for the physical Rx/Tx size
>> limitation. The Rx buffer is where Rx packets are stored; many PMDs
>> use the mbuf data buffer size as the Rx buffer size.
>> PMDs compare the MTU against the Rx buffer size to decide whether to
>> enable scattered Rx. If scattered Rx is not supported by the device,
>> an MTU bigger than the Rx buffer size should fail.
>
> Should it be compared also against max_lro_pkt_size for the SCATTER
> enabling by the PMD?
>

I kept the LRO related code the same; the Rx packet length change patch
has already become complex, and the LRO related changes can be done
later instead of making this set more confusing. It would be great if
you and Dekel can work on it, as you introduced 'max_lro_pkt_size' in
ethdev.

> What do you think about enabling SCATTER by the API instead of making
> the comparison in each PMD?
>

Not sure if we can do that; as far as I can see there is no enforcement
on the Rx buffer size, the PMDs select it.

>> Signed-off-by: Ferruh Yigit
>
> Please see more below regarding SCATTER.
>
>> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
>> index 978cbb8201ea..4a5cfd22aa71 100644
>> --- a/drivers/net/mlx4/mlx4_rxq.c
>> +++ b/drivers/net/mlx4/mlx4_rxq.c
>> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  	int ret;
>>  	uint32_t crc_present;
>>  	uint64_t offloads;
>> +	uint32_t max_rx_pktlen;
>>  
>>  	offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>>  
>> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  	};
>>  	/* Enable scattered packets support for this queue if necessary. */
>>  	MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
>> -	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
>> -	    (mb_len - RTE_PKTMBUF_HEADROOM)) {
>> +	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>> +	if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
>>  		;
>>  	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
>> -		uint32_t size =
>> -			RTE_PKTMBUF_HEADROOM +
>> -			dev->data->dev_conf.rxmode.max_rx_pkt_len;
>> +		uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
>>  		uint32_t sges_n;
>>  
>>  		/*
>> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  		/* Make sure sges_n did not overflow.
>>  		 */
>>  		size = mb_len * (1 << rxq->sges_n);
>>  		size -= RTE_PKTMBUF_HEADROOM;
>> -		if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
>> +		if (size < max_rx_pktlen) {
>>  			rte_errno = EOVERFLOW;
>>  			ERROR("%p: too many SGEs (%u) needed to handle"
>>  			      " requested maximum packet size %u",
>>  			      (void *)dev,
>> -			      1 << sges_n,
>> -			      dev->data->dev_conf.rxmode.max_rx_pkt_len);
>> +			      1 << sges_n, max_rx_pktlen);
>>  			goto error;
>>  		}
>>  	} else {
>>  		WARN("%p: the requested maximum Rx packet size (%u) is"
>>  		     " larger than a single mbuf (%u) and scattered"
>>  		     " mode has not been requested",
>> -		     (void *)dev,
>> -		     dev->data->dev_conf.rxmode.max_rx_pkt_len,
>> +		     (void *)dev, max_rx_pktlen,
>>  		     mb_len - RTE_PKTMBUF_HEADROOM);
>>  	}
>
> If, by definition, SCATTER should be enabled implicitly by the PMD
> according to the comparison you wrote above, maybe this check for the
> SCATTER offload is not needed.
>

This behavior is not documented and not clear; some PMDs enable
scattered Rx implicitly, some don't. It looks like we need a
clarification patch for scattered Rx too.
For this patch I added scatter related info to the commit log to
clarify the reasoning for the change. PMD behavior is not changed.

> Also, it can be documented on the SCATTER offload precisely which
> parameters are used for the comparison, and that it is for capability
> only, with no need to configure it.
>

We are having the same question with a few other offloads: should we
take the user configuration strictly and fail, or should we adjust the
config to the requested values? For example, if the PMD supports
scattered Rx and the requested MTU is bigger than the Rx buffer size,
should the PMD enable scattered Rx itself, or fail? We should first
clarify this and later fix the documentation and drivers in a separate
patch.

> Also, for the case of multi Rx mempools configuration, it can be
> implicitly understood by the PMDs to enable SCATTER, with no need to
> check that in the PMD/API.
>

Yes, multi Rx mempools is something else to take into account for the
scattered Rx config.
> What do you think?
>

>>  	DEBUG("%p: maximum number of segments per packet: %u",
>>
>> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
>> index abd8ce798986..6f4f351222d3 100644
>> --- a/drivers/net/mlx5/mlx5_rxq.c
>> +++ b/drivers/net/mlx5/mlx5_rxq.c
>> @@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  	uint64_t offloads = conf->offloads |
>>  			   dev->data->dev_conf.rxmode.offloads;
>>  	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
>> -	unsigned int max_rx_pkt_len = lro_on_queue ?
>> +	unsigned int max_rx_pktlen = lro_on_queue ?
>>  			dev->data->dev_conf.rxmode.max_lro_pkt_size :
>> -			dev->data->dev_conf.rxmode.max_rx_pkt_len;
>> -	unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
>> +			dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
>> +			RTE_ETHER_CRC_LEN;
>> +	unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
>>  			RTE_PKTMBUF_HEADROOM;
>>  	unsigned int max_lro_size = 0;
>>  	unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
>> @@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  	 * needed to handle max size packets, replace zero length
>>  	 * with the buffer length from the pool.
>>  	 */
>> -	tail_len = max_rx_pkt_len;
>> +	tail_len = max_rx_pktlen;
>>  	do {
>>  		struct mlx5_eth_rxseg *hw_seg =
>>  				&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
>> @@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  			"port %u too many SGEs (%u) needed to handle"
>>  			" requested maximum packet size %u, the maximum"
>>  			" supported are %u", dev->data->port_id,
>> -			tmpl->rxq.rxseg_n, max_rx_pkt_len,
>> +			tmpl->rxq.rxseg_n, max_rx_pktlen,
>>  			MLX5_MAX_RXQ_NSEG);
>>  		rte_errno = ENOTSUP;
>>  		goto error;
>> @@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
>>  			" configured and no enough mbuf space(%u) to contain "
>>  			"the maximum RX packet length(%u) with head-room(%u)",
>> -			dev->data->port_id, idx, mb_len, max_rx_pkt_len,
>> +			dev->data->port_id, idx, mb_len, max_rx_pktlen,
>>  			RTE_PKTMBUF_HEADROOM);
>>  		rte_errno = ENOSPC;
>>  		goto error;
>
> Also, here for the SCATTER check. Here, it is even an error.
>
>> @@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  	 * following conditions are met:
>>  	 *  - MPRQ is enabled.
>>  	 *  - The number of descs is more than the number of strides.
>> -	 *  - max_rx_pkt_len plus overhead is less than the max size
>> +	 *  - max_rx_pktlen plus overhead is less than the max size
>>  	 *    of a stride or mprq_stride_size is specified by a user.
>>  	 *    Need to make sure that there are enough strides to encap
>>  	 *    the maximum packet size in case mprq_stride_size is set.
>> @@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  			!!(offloads & DEV_RX_OFFLOAD_SCATTER);
>>  		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
>>  				config->mprq.max_memcpy_len);
>> -		max_lro_size = RTE_MIN(max_rx_pkt_len,
>> +		max_lro_size = RTE_MIN(max_rx_pktlen,
>>  				(1u << tmpl->rxq.strd_num_n) *
>>  				(1u << tmpl->rxq.strd_sz_n));
>>  		DRV_LOG(DEBUG,
>> @@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  			dev->data->port_id, idx,
>>  			tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
>>  	} else if (tmpl->rxq.rxseg_n == 1) {
>> -		MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
>> +		MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
>>  		tmpl->rxq.sges_n = 0;
>> -		max_lro_size = max_rx_pkt_len;
>> +		max_lro_size = max_rx_pktlen;
>>  	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
>>  		unsigned int sges_n;
>>  
>> @@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>>  			"port %u too many SGEs (%u) needed to handle"
>>  			" requested maximum packet size %u, the maximum"
>>  			" supported are %u", dev->data->port_id,
>> -			1 << sges_n, max_rx_pkt_len,
>> +			1 << sges_n, max_rx_pktlen,
>>  			1u << MLX5_MAX_LOG_RQ_SEGS);
>>  		rte_errno = ENOTSUP;
>>  		goto error;
>>  	}
>>  	tmpl->rxq.sges_n = sges_n;
>> -	max_lro_size = max_rx_pkt_len;
>> +	max_lro_size = max_rx_pktlen;
>>  	}
>>  	if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
>>  		DRV_LOG(WARNING,