From: Jack Min
Date: Fri, 18 Aug 2023 13:57:21 +0800
Subject: Re: MLX5 PMD access ring library private data
To: Honnappa Nagarahalli, Stephen Hemminger
Cc: dev@dpdk.org, Matan Azrad, viacheslavo@nvidia.com, Tyler Retzlaff, Wathsala Wathawana Vithanage, nd
Message-ID: <1fbe4f48-e652-4887-8949-ad9cd15e2c26@nvidia.com>
References: <20230817070658.45576e6d@hermes.local>
List-Id: DPDK patches and discussions <dev@dpdk.org>
On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>
>> -----Original Message-----
>> From: Jack Min
>> Sent: Thursday, August 17, 2023 9:32 PM
>> To: Stephen Hemminger; Honnappa Nagarahalli
>> Cc: dev@dpdk.org; Matan Azrad; viacheslavo@nvidia.com; Tyler Retzlaff;
>> Wathsala Wathawana Vithanage; nd
>>
>> Subject: Re: MLX5 PMD access ring library private data
>>
>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>> Honnappa Nagarahalli wrote:
>>>
>>>> Hi Matan, Viacheslav,
>>>> Tyler pointed out that the function
>>>> __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private
>>>> structure members (prod.head and prod.tail) directly. Even though
>>>> 'struct rte_ring' is a public structure (mainly because the library
>>>> provides inline functions), the structure members are considered
>>>> private to the ring library.
>>>> So, this needs to be corrected.
>>>>
>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is
>>>> trying to revert things that were enqueued. It is not clear to me why
>>>> this functionality is required. Can you provide the use case for
>>>> this? We can discuss possible solutions.
>>> How can reverting be thread safe? Consumer could have already looked
>>> at them?
>>
>> Hey,
>>
>> In our case, this ring is SC/SP, only accessed by one thread
>> (enqueue/dequeue/revert).
> You could implement a simpler and more efficient ring for this (for
> example, such an implementation would not need any atomic operations
> and would touch fewer cache lines).
> Is this function being used in the dataplane?

Yes, we could have our own version of the ring (with no atomic operations), but its basic operations would still be the same as rte_ring's.

Since rte_ring is well designed and has been tested thoroughly, there has been no strong reason to rewrite a simpler version of it until now :)

>> The scenario where we need "revert" is:
>>
>> We use a ring to manage our HW objects (counters in this case), and
>> each core (thread) has a "cache" (an SC/SP ring) for the sake of
>> performance.
>>
>> 1. Get objects from the "cache" first; if the cache is empty, we fetch
>> a bulk of free objects from the global ring into the cache.
>>
>> 2. Put (free) objects into the "cache" first as well; if the cache is
>> full, we flush a bulk of objects into the global ring to make some
>> room in the cache.
>>
>> However, a HW object cannot be reused immediately after being freed.
>> It needs time to be reset before it can be used again.
>>
>> So when we flush the cache, we want the first-enqueued objects to stay
>> where they are, because they have a better chance of already being
>> reset than the most recently enqueued objects.
>>
>> We only flush the recently enqueued objects back into the global ring,
>> which acts as "LIFO" behavior.
>>
>> This is why we need to "revert" enqueued objects.
> You could use the 'rte_ring_free_count' API before you enqueue to check
> for available space.
Only when the cache is full (rte_ring_free_count() returns zero) do we revert X objects. If there is still one free slot, we will not trigger the revert (flush).

>
>> -Jack
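To make the discussion concrete, here is a minimal, self-contained model of the per-core cache with the LIFO flush ("revert") described above. All names and the layout are illustrative only — this is neither the mlx5 code nor the rte_ring API; the real __mlx5_hws_cnt_pool_enqueue_revert manipulates rte_ring internals (prod.head/prod.tail), which is exactly what this thread argues should sit behind a proper library API.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal single-producer/single-consumer ring modelling the per-core
 * object cache. Capacity must be a power of two so index wrapping can
 * use a mask. */
#define CACHE_SIZE 8u
#define CACHE_MASK (CACHE_SIZE - 1u)

struct obj_cache {
	uint32_t head;             /* producer index: next free slot */
	uint32_t tail;             /* consumer index: next used slot */
	uint32_t ring[CACHE_SIZE];
};

static uint32_t cache_count(const struct obj_cache *c)
{
	/* Unsigned subtraction handles index wraparound. */
	return c->head - c->tail;
}

static int cache_put(struct obj_cache *c, uint32_t obj)
{
	if (cache_count(c) == CACHE_SIZE)
		return -1; /* full: caller reverts/flushes to the global ring */
	c->ring[c->head & CACHE_MASK] = obj;
	c->head++;
	return 0;
}

static int cache_get(struct obj_cache *c, uint32_t *obj)
{
	if (cache_count(c) == 0)
		return -1; /* empty: caller refills from the global ring */
	*obj = c->ring[c->tail & CACHE_MASK];
	c->tail++;
	return 0;
}

/* "Revert": un-publish the n most recently enqueued objects so they can
 * be flushed to the global ring, while the oldest objects (most likely
 * already reset by hardware) stay cached. Moving the producer index
 * backwards is only safe because a single thread owns this ring. */
static uint32_t cache_revert(struct obj_cache *c, uint32_t n, uint32_t *out)
{
	uint32_t i;

	if (n > cache_count(c))
		n = cache_count(c);
	for (i = 0; i < n; i++) {
		c->head--;
		out[i] = c->ring[c->head & CACHE_MASK];
	}
	return n;
}
```

Because one thread owns both head and tail, no atomics are needed here; on a multi-producer or multi-consumer ring the same backwards move of the producer index would race with concurrent enqueues and dequeues, which is Stephen's thread-safety concern above.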