From: Ferruh Yigit
Date: Wed, 7 Sep 2022 12:24:24 +0100
Subject: Re: [EXT] Re: [PATCH v2 1/3] ethdev: introduce pool sort capability
To: Hanumanth Reddy Pothula, "Ding, Xuan", Thomas Monjalon, Andrew Rybchenko
Cc: dev@dpdk.org, "Wu, WenxuanX", "Li, Xiaoyun", stephen@networkplumber.org, "Wang, YuanX", mdr@ashroe.eu, "Zhang, Yuying", "Zhang, Qi Z", viacheslavo@nvidia.com, Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram
Message-ID: <1e1ae69c-8cf3-5447-762f-ef00616dc225@xilinx.com>
References: <20220812104648.1019978-1-hpothula@marvell.com> <20220812172451.1208933-1-hpothula@marvell.com> <76e4091d-4037-ef5b-377f-7daa183e27d3@xilinx.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed

On 9/7/2022 8:02 AM, Hanumanth Reddy Pothula wrote:
>
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: Tuesday, September 6, 2022 5:48 PM
>> To: Hanumanth Reddy Pothula; Ding, Xuan; Thomas Monjalon; Andrew Rybchenko
>> Cc: dev@dpdk.org; Wu, WenxuanX; Li, Xiaoyun; stephen@networkplumber.org; Wang, YuanX; mdr@ashroe.eu; Zhang, Yuying; Zhang, Qi Z; viacheslavo@nvidia.com; Jerin Jacob Kollanukkaran; Nithin Kumar Dabilpuram
>> Subject: Re: [EXT] Re: [PATCH v2 1/3] ethdev: introduce pool sort capability
>>
>> On 8/30/2022 1:08 PM, Hanumanth Reddy Pothula wrote:
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit
>>>> Sent: Wednesday, August 24, 2022 9:04 PM
>>>> To: Ding, Xuan; Hanumanth Reddy Pothula; Thomas Monjalon; Andrew Rybchenko
>>>> Cc: dev@dpdk.org; Wu, WenxuanX; Li, Xiaoyun; stephen@networkplumber.org; Wang, YuanX; mdr@ashroe.eu; Zhang, Yuying; Zhang, Qi Z; viacheslavo@nvidia.com; Jerin Jacob Kollanukkaran; Nithin Kumar Dabilpuram
>>>> Subject: [EXT] Re: [PATCH v2 1/3] ethdev: introduce pool sort capability
>>>>
>>>> External Email
>>>>
>>>> ----------------------------------------------------------------------
>>>
>>> Thanks Ding Xuan and Ferruh Yigit for reviewing the changes and for providing your valuable feedback.
>>> Please find responses inline.
>>>
>>>> On 8/23/2022 4:26 AM, Ding, Xuan wrote:
>>>>> Hi Hanumanth,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Hanumanth Pothula
>>>>>> Sent: Saturday, August 13, 2022 1:25 AM
>>>>>> To: Thomas Monjalon; Ferruh Yigit; Andrew Rybchenko
>>>>>> Cc: dev@dpdk.org; Ding, Xuan; Wu, WenxuanX; Li, Xiaoyun; stephen@networkplumber.org; Wang, YuanX; mdr@ashroe.eu; Zhang, Yuying; Zhang, Qi Z; viacheslavo@nvidia.com; jerinj@marvell.com; ndabilpuram@marvell.com; Hanumanth Pothula
>>>>>> Subject: [PATCH v2 1/3] ethdev: introduce pool sort capability
>>>>>>
>>>>>> Presently, the 'Buffer Split' feature supports sending multiple
>>>>>> segments of the received packet to the PMD, which programs the HW to
>>>>>> receive the packet in segments from different pools.
>>>>>>
>>>>>> This patch extends the feature to support the pool sort capability.
>>>>>> Some HW has support for choosing memory pools based on the
>>>>>> packet's size. The pool sort capability allows the PMD to choose a
>>>>>> memory pool based on the packet's length.
>>>>>>
>>>>>> This is often useful for saving memory: the application
>>>>>> can create a different pool to steer a specific packet size,
>>>>>> thus enabling effective use of memory.
>>>>>>
>>>>>> For example, let's say the HW has a capability of three pools,
>>>>>> - pool-1 size is 2K
>>>>>> - pool-2 size is > 2K and < 4K
>>>>>> - pool-3 size is > 4K
>>>>>> Here,
>>>>>> pool-1 can accommodate packets with sizes < 2K
>>>>>> pool-2 can accommodate packets with sizes > 2K and < 4K
>>>>>> pool-3 can accommodate packets with sizes > 4K
>>>>>>
>>>>>> With the pool sort capability enabled in SW, an application may create
>>>>>> three pools of different sizes and send them to the PMD, allowing the
>>>>>> PMD to program the HW based on packet lengths, so that packets shorter
>>>>>> than 2K are received on pool-1, packets with lengths between 2K and
>>>>>> 4K are received on pool-2, and finally packets greater than 4K are
>>>>>> received on pool-3.
>>>>>>
>>>>>> The following two capabilities are added to the rte_eth_rxseg_capa
>>>>>> structure,
>>>>>> 1. pool_sort --> tells that the pool sort capability is supported by HW.
>>>>>> 2. max_npool --> max number of pools supported by HW.
>>>>>>
>>>>>> Defined a new structure, rte_eth_rxseg_sort, to be used only when the
>>>>>> pool sort capability is present. If required this may be extended
>>>>>> further to support more configurations.
>>>>>>
>>>>>> Signed-off-by: Hanumanth Pothula
>>>>>>
>>>>>> v2:
>>>>>> - Along with spec changes, uploading testpmd and driver changes.
>>>>>
>>>>> Thanks for CCing. It's an interesting feature.
>>>>>
>>>>> But I have one question here:
>>>>> Buffer split is for splitting received packets into multiple segments,
>>>>> while pool sort supports the PMD putting received packets into
>>>>> different pools according to packet size.
>>>>> Every packet is still intact.
>>>>>
>>>>> So, at this level, pool sort does not belong to buffer split.
>>>>> And you already use a different function to check pool sort rather
>>>>> than check buffer split.
>>>>>
>>>>> Should a new RX offload be introduced, like "RTE_ETH_RX_OFFLOAD_POOL_SORT"?
>>>>>
>>> Please find my response below.
>>>>
>>>> Hi Hanumanth,
>>>>
>>>> I had a similar concern with the feature. I assume you want to
>>>> benefit from the existing config structure that gets multiple mempools as
>>>> argument, since this feature also needs multiple mempools, but the feature is
>>>> different.
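
To make the three-pool example and the proposed pool_sort/max_npool capabilities in the quoted commit message more concrete, here is a minimal, self-contained sketch of that configuration shape. Every struct, field and pool name below is illustrative only; it is neither the patch code nor the existing ethdev API, which instead carries this information in rte_eth_rxseg_capa and the new rte_eth_rxseg_sort structure as described above.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for a per-pool entry handed to the PMD: which pool
 * to use (a name here, a struct rte_mempool pointer in real code) and the
 * largest packet it is meant to hold. */
struct example_pool_cfg {
    const char *name;
    uint32_t max_len;
};

/* Illustrative stand-in for the two capability fields the commit message
 * adds: whether the HW can pick a pool by packet length, and how many
 * pools it supports at most. */
struct example_sort_capa {
    uint8_t pool_sort;
    uint8_t max_npool;
};

/* Pick the first pool whose max_len covers the packet, mirroring the
 * 2K / 2K-4K / >4K example above. */
static const struct example_pool_cfg *
pick_pool(const struct example_pool_cfg *pools, int n, uint32_t pkt_len)
{
    for (int i = 0; i < n; i++)
        if (pkt_len <= pools[i].max_len)
            return &pools[i];
    return &pools[n - 1];
}

int main(void)
{
    const struct example_pool_cfg pools[] = {
        { "pool-1", 2048 },      /* packets up to 2K  */
        { "pool-2", 4096 },      /* packets 2K to 4K  */
        { "pool-3", UINT32_MAX } /* packets above 4K  */
    };
    const struct example_sort_capa capa = { .pool_sort = 1, .max_npool = 3 };

    if (capa.pool_sort) {
        printf("512B  -> %s\n", pick_pool(pools, capa.max_npool, 512)->name);
        printf("3000B -> %s\n", pick_pool(pools, capa.max_npool, 3000)->name);
        printf("9000B -> %s\n", pick_pool(pools, capa.max_npool, 9000)->name);
    }
    return 0;
}
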
>>>> It looks to me wrong to check the 'OFFLOAD_BUFFER_SPLIT' offload to
>>>> decide whether to receive into multiple mempools or not, which doesn't have
>>>> anything related to split.
>>>> Also not sure about using the 'sort' keyword.
>>>> What do you think about introducing a new feature, instead of extending the
>>>> existing split one?
>>>
>>> Actually we thought both BUFFER_SPLIT and POOL_SORT are similar
>>> features where RX pools are configured in a certain way, and thought not to
>>> use up one more RX offload capability, as the existing software architecture
>>> can be extended to support the pool_sort capability.
>>> Yes, as part of pool sort there is no buffer split, but pools are picked based on
>>> the buffer length.
>>>
>>> Since you think it's better to use a new RX offload for POOL_SORT, will go ahead
>>> and implement the same.
>>>
>>>> This is an optimisation, right? To enable us to use less memory for the
>>>> packet buffer, does it qualify as a device offload?
>>>>
>>> Yes, it qualifies as a device offload and saves memory.
>>> The Marvell NIC has a capability to receive packets on two different pools based
>>> on their length.
>>> Below explained more on the same.
>>>>
>>>> Also, what is the relation with segmented Rx, how does a PMD decide to use
>>>> segmented Rx or a bigger mempool? How can the application configure this?
>>>>
>>>> Need to clarify the rules; based on your sample, if a 512-byte
>>>> packet is received, does it have to go to pool-1, or can it go to any of the three pools?
>>>>
>>> Here, the Marvell NIC supports two HW pools, the SPB (small packet buffer) pool
>>> and the LPB (large packet buffer) pool.
>>> The SPB pool can hold up to 4KB.
>>> The LPB pool can hold anything more than 4KB.
>>> Smaller packets are received on the SPB pool and larger packets on the LPB pool,
>>> based on the RQ configuration.
>>> Here, in our case the HW pools hold the whole packet. So if a packet is
>>> divided into segments, the lower-layer HW is going to receive all segments of
>>> the packet and then place the whole packet in the SPB/LPB pool, based
>>> on the packet length.
>>>
>>
>> If the packet is bigger than 4KB, you have two options,
>> 1- Use multiple chained buffers in SPB
>> 2- Use a single LPB buffer
>>
>> As I understand (2) is used in this case, but I think we should clarify how this
>> feature works with the 'RTE_ETH_RX_OFFLOAD_SCATTER' offload, if it is requested
>> by the user.
>>
>> Or let's say the HW has two pools with 1K and 2K sizes, what is expected with a 4K
>> packet, with or without the scattered Rx offload?
>>
>
> As mentioned, Marvell supports two pools, pool-1 (SPB) and pool-2 (LPB).
> If the packet length is within the pool-1 length and it has only one segment, then the packet is allocated from pool-1.
> If the packet length is greater than pool-1 or it has more than one segment, then the packet is allocated from pool-2.
>
> So, here packets with a single segment and length less than 1K are allocated from pool-1, and
> packets with multiple segments or packets with length greater than 1K are allocated from pool-2.
>

Whether to receive multiple segments or not is a HW configuration, it is not an external variable. Drivers mostly decide whether to configure the HW to receive multiple segments based on the buffer size and the max packet size the device supports. In this case, since the buffer size is not fixed and there are multiple buffer sizes, how will the driver configure the HW?

This is not specific to Marvell HW; for the case where multiple mempools are supported, it is better to clarify in this patch how it works with the 'RTE_ETH_RX_OFFLOAD_SCATTER' offload.
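
To make the rule described in the reply above concrete, here is a small, self-contained sketch of that selection logic; the function and names are made up for illustration and are not taken from the cnxk driver or the patch.

#include <stdint.h>

enum example_pool { EXAMPLE_POOL_SPB, EXAMPLE_POOL_LPB };

/* Pool-1 (SPB) only if the packet fits the SPB buffer AND is a single
 * segment; otherwise pool-2 (LPB).  Note that whether a packet ends up as
 * multiple segments is itself a HW/driver configuration, which is exactly
 * the point raised in the comment above. */
enum example_pool
example_select_pool(uint32_t pkt_len, uint16_t nb_segs, uint32_t spb_buf_len)
{
    if (pkt_len <= spb_buf_len && nb_segs == 1)
        return EXAMPLE_POOL_SPB;
    return EXAMPLE_POOL_LPB;
}

/* Open question from the thread: with pools of, say, 1K and 2K, what a 4K
 * packet does with and without RTE_ETH_RX_OFFLOAD_SCATTER is exactly what
 * the patch is asked to document. */
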
>>> As pools are picked based on the packet's length, we used the SORT term. In case
>>> you have any better term (word), please suggest.
>>>
>>
>> What about multiple pool, like RTE_ETH_RX_OFFLOAD_MULTIPLE_POOL? I think
>> it is clearer, but I would like to get more comments from others, naming is
>> hard ;)
>>
> Yes, RTE_ETH_RX_OFFLOAD_MULTIPLE_POOL is clearer than RTE_ETH_RX_OFFLOAD_SORT_POOL.
> Thanks for the suggestion.
> Will upload V4 with RTE_ETH_RX_OFFLOAD_MULTIPLE_POOL.
>
>>>>
>>>> And I don't see any change in the 'net/cnxk' Rx burst code; when
>>>> multiple mempools are used, shouldn't it check which mempool is filled
>>>> while filling the mbufs? How does this work without an update in the Rx
>>>> burst code, or am I missing some implementation detail?
>>>>
>>> Please find the PMD changes in patch [v2,3/3] net/cnxk: introduce pool
>>> sort capability. Here, in the control path, HW pools are programmed based on the
>>> inputs received from the application.
>>> Once the HW is programmed, packets are received on HW pools based on the
>>> packet sizes.
>>
>> I was expecting changes in the datapath too, something like the Rx burst function
>> checking whether SPB or LPB is used and updating the mbuf pointers accordingly.
>> But it seems the HW doesn't work this way, can you please explain how this feature
>> works transparently to the datapath code?
>>
>>>
>>> I will upload V3 where POOL_SORT is implemented as a new RX offload, unless
>>> you have any other suggestions/thoughts.
>>>
>>
>> <...>
>
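
To make the datapath question above concrete, here is a rough sketch of the kind of per-packet pool check being described; all structures, fields and helper names are hypothetical and are not taken from the cnxk driver or the v2 patch, which per the reply above programs the pools only in the control path.

#include <stdint.h>

/* Hypothetical completion-descriptor view: assume the HW reports, per
 * packet, which of the two pools the buffer came from. */
struct example_rx_desc {
    uint64_t buf_addr;   /* buffer address written by HW          */
    uint32_t pkt_len;    /* packet length reported by HW          */
    uint8_t  pool_id;    /* 0 = SPB, 1 = LPB (illustrative only)  */
};

struct example_mempool { const char *name; };

/* The check described as the expectation: translate the HW-reported pool
 * back to the right mempool while building the mbuf.  The open question in
 * the thread is how the feature stays transparent to the Rx burst code
 * without something like this. */
const struct example_mempool *
example_resolve_pool(const struct example_rx_desc *desc,
                     const struct example_mempool pools[2])
{
    return &pools[desc->pool_id ? 1 : 0];
}
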