From: Ajit Khaparde
Date: Thu, 25 Mar 2021 16:16:47 -0700
To: Matan Azrad
Cc: "Dumitrescu, Cristian", Li Zhang, Dekel Peled, Ori Kam, Slava Ovsiienko,
    Shahaf Shuler, lironh@marvell.com, "Singh, Jasvinder", Thomas Monjalon,
    "Yigit, Ferruh", Andrew Rybchenko, Jerin Jacob, Hemant Agrawal,
    "Richardson, Bruce", "Doherty, Declan", dev@dpdk.org, Raslan Darawsheh,
    Roni Bar Yanai
References: <20210318085815.804896-1-lizh@nvidia.com>
    <20210318085815.804896-2-lizh@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH 2/2] [RFC]: ethdev: manage meter API object
    handles by the drivers

On Thu, Mar 25, 2021 at 1:21 AM Matan Azrad wrote:
>
> Hi Cristian
>
> From: Dumitrescu, Cristian
> > Hi Li and Matan,
> >
> > > -----Original Message-----
> > > From: Li Zhang
> > > Sent: Thursday, March 18, 2021 8:58 AM
> > > To: dekelp@nvidia.com; orika@nvidia.com; viacheslavo@nvidia.com;
> > > matan@nvidia.com; shahafs@nvidia.com; lironh@marvell.com;
> > > Singh, Jasvinder; Thomas Monjalon; Yigit, Ferruh;
> > > Andrew Rybchenko; Dumitrescu, Cristian
> > > Cc: dev@dpdk.org; rasland@nvidia.com; roniba@nvidia.com
> > > Subject: [PATCH 2/2] [RFC]: ethdev: manage meter API object handles by
> > > the drivers
> > >
> > > Currently, all the meter objects are managed by user IDs:
> > > meter, profile and policy.
> > > Hence, each PMD has to maintain a data structure in order to map each
> > > API ID to its private PMD management structure.
> > >
> > > On the application side, the application has the full picture of how
> > > meters are going to be assigned to flows and can easily use direct
> > > mapping even when the meter handle is provided by the PMD.
> > >
> > > Also, this is the approach of the rte_flow API handles:
> > > the flow handle and the shared action handle are provided by the PMDs.
> > >
> > > Use driver handles in order to manage all the meter API objects.
> > >
> >
> > This seems to be take 2 of the discussion that we already had in this
> > thread: https://mails.dpdk.org/archives/dev/2021-March/200710.html,
> > so apologies for mostly summarizing my previous feedback here.
> >
> > I am against this proposal because:
> > 1. We already discussed this topic of user-provided handles vs.
> > driver-provided handles at length on this exact email list back in 2017,
> > when we first introduced this API, and I don't see any real reason to
> > revisit the decision we took then.
>
> Why not?
> There is more experience and usage now.
> New drivers have added support, and the scalability requirements keep growing.
>
> > 2. For me, it is more natural and it also helps the application to
> > simplify its data structures if the user provides its own IDs rather
> > than the user having to deal with the IDs provided by the driver.
>
> Generally, I don't think the other flow-related DPDK APIs align with
> this view; see the rte_flow object and rte_flow_shared_action.
>
> Specifically for meter:
> - Here, meter is a HW/driver offload where the performance/rate of
>   meter creation and deletion, as well as of the actual data path, is
>   very important, especially at very large scale, so "natural" matters
>   less here.
> We need to think about the global application -> API -> driver solution.
> For the meter feature, the user can manage the IDs better than the PMDs
> in most of the use-cases:
> 1. Meter per flow: just save the driver handle in the app flow context.
> 2. Meter per VM / user flows / rte_flow group / any other context that
>    groups multiple flows: just save the driver handle in the app context.
> If the PMD needs to map the IDs, it is certainly more complex, and it
> requires more memory and more lookup time.
>
> - I'm not sure it is natural for all use-cases; sometimes generating a
>   unique ID may complicate the app.
>
> > 3. It is much easier and portable to pass numeric and string-based IDs
> > around (e.g. between processes) as opposed to pointer-based IDs, as
> > pointers are only valid in one address space and not in others. There
> > are several DPDK APIs that moved away from pointer handles to string IDs.

Pardon my ignorance, but which DPDK APIs moved from pointer handles to
string IDs?
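
To make the comparison concrete, here is a rough sketch of how an
application drives the current ID-based meter API next to the
pointer-handle style that rte_flow already uses. This is illustrative
only; it is not taken from the RFC, and the signatures the RFC would
introduce for driver-returned meter handles may differ.

#include <rte_flow.h>
#include <rte_mtr.h>

/* Sketch only: 'profile', 'params', 'attr' and 'pattern' are assumed to be
 * filled in by the caller; all error handling is omitted for brevity. */
static struct rte_flow *
create_metered_flow(uint16_t port_id,
                    struct rte_mtr_meter_profile *profile,
                    struct rte_mtr_params *params,
                    const struct rte_flow_attr *attr,
                    const struct rte_flow_item pattern[])
{
        struct rte_mtr_error mtr_err;
        struct rte_flow_error flow_err;

        /* Current model: the application invents the numeric IDs ... */
        rte_mtr_meter_profile_add(port_id, 1 /* app-chosen profile ID */,
                                  profile, &mtr_err);
        params->meter_profile_id = 1;
        rte_mtr_create(port_id, 100 /* app-chosen meter ID */, params,
                       0 /* not shared */, &mtr_err);

        /* ... and reuses the same ID when attaching the meter to a flow. */
        struct rte_flow_action_meter meter_conf = { .mtr_id = 100 };
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_METER, .conf = &meter_conf },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* rte_flow, by contrast, already hands back a driver-owned handle. */
        return rte_flow_create(port_id, attr, pattern, actions, &flow_err);
}

With the RFC, rte_mtr_create() would presumably return a driver-owned
handle the same way rte_flow_create() does, and the application would
store that pointer instead of inventing the 100 above.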
>
> Yes, I agree here generally.
> But again, since meter is used only by rte_flow, it is better to align
> with the same handle mechanism.

I don't want to say "do this just because rte_flow uses a pointer"; I
don't have a strong opinion for one over the other, and in the end the
logic can be adapted either way. But having implemented rte_flow support
in the PMD, I think it is a good idea to avoid duplicating the meter_id
to pointer-based handle conversion and bookkeeping logic in both the
application and the PMD (a rough sketch of that bookkeeping is at the
end of this mail).

> > 4. The mapping of user IDs to internal pointers within the driver is
> > IMO not a big issue in terms of memory footprint or API call rate.
> > Matan also confirmed this in the above thread when saying this is not
> > about either driver memory footprint or API call speed, as this mapping
> > is easy to optimize.
>
> Yes, it is not a very big deal, but it still costs more than the new
> suggestion, especially at large scale.
>
> > And last but not least, this change obviously propagates into every API
> > function, so it would result in big churn in the API, all drivers and
> > all apps (including testpmd, etc.) implementing it (for IMO no real
> > benefit). Yes, this API is experimental and therefore we can make
> > changes to it, but I'd rather see incremental and converging
> > improvements rather than this.
>
> Yes, it changes the whole API, but only a very small part of each
> function; it will be very easy to align all the current DPDK components
> to this concept.
>
> > If you guys insist on this proposal, I would like to get more opinions
> > from other vendors and contributors from within our DPDK community.
>
> Yes, more opinions are very welcome.
>
> Thanks
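
P.S. To illustrate the ID-to-pointer bookkeeping discussed in point 4 and
in my comment above: under the ID-based API, a PMD (and often the
application as well) ends up carrying a translation table roughly like
the one below. This is only a sketch, not code from any existing driver;
real PMDs typically use a hash table or a list rather than a fixed-size
array.

#include <stdint.h>
#include <stddef.h>

#define MAX_METERS 4096

struct pmd_mtr {                /* hypothetical PMD-private meter object */
        uint32_t mtr_id;        /* user-chosen ID it was created with */
        int in_use;
        /* ... device state for this meter ... */
};

static struct pmd_mtr mtr_table[MAX_METERS];

/* Every API call that takes a mtr_id pays for a lookup like this one;
 * with driver-returned handles the pointer is already in hand. */
static struct pmd_mtr *
pmd_mtr_lookup(uint32_t mtr_id)
{
        for (size_t i = 0; i < MAX_METERS; i++)
                if (mtr_table[i].in_use && mtr_table[i].mtr_id == mtr_id)
                        return &mtr_table[i];
        return NULL;            /* unknown ID */
}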