From: Konstantin Ananyev
Subject: Re: MLX5 PMD access ring library private data
Date: Sat, 19 Aug 2023 12:57:30 +0100
To: Jack Min, Konstantin Ananyev, Stephen Hemminger, Honnappa Nagarahalli
Cc: dev@dpdk.org, Matan Azrad, viacheslavo@nvidia.com, Tyler Retzlaff, Wathsala Wathawana Vithanage, nd
Message-ID: <9850f8f4-bc5b-3af7-c538-99a26aab6d9c@yandex.ru>
In-Reply-To: <63206979-911f-439b-816d-ee5c1c67f195@nvidia.com>
References: <20230817070658.45576e6d@hermes.local> <63206979-911f-439b-816d-ee5c1c67f195@nvidia.com>
List-Id: DPDK patches and discussions

18/08/2023 10:38, Jack Min writes:
> On 2023/8/18 17:05, Konstantin Ananyev wrote:
>>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>>> Honnappa Nagarahalli wrote:
>>>>
>>>>> Hi Matan, Viacheslav,
>>>>> Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert
>>>>> accesses the ring's private structure members (prod.head and
>>>>> prod.tail) directly. Even though 'struct rte_ring' is a public
>>>>> structure (mainly because the library provides inline functions), its
>>>>> members are considered private to the ring library, so this needs to
>>>>> be corrected.
>>>>> It looks like __mlx5_hws_cnt_pool_enqueue_revert is trying to revert
>>>>> things that were enqueued. It is not clear to me why this
>>>>> functionality is required. Can you provide the use case for it? We
>>>>> can discuss possible solutions.
>>>> How can reverting be thread-safe? The consumer could have already
>>>> looked at those entries.
>>> Hey,
>>>
>>> In our case, this ring is SC/SP, only accessed by one thread
>>> (enqueue/dequeue/revert).
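[A minimal sketch, for illustration only, of what such a revert amounts
to on an SC/SP ring. This is not the actual mlx5 function, and the
helper name is invented. It touches members the ring library treats as
private (prod.head, prod.tail, mask, and the slot array behind the
header), which is exactly the practice being questioned here; it is
only plausible because a single thread owns both ends of the ring:]

#include <rte_common.h>
#include <rte_ring.h>

/*
 * Remove the n most recently enqueued pointers from a single-producer/
 * single-consumer ring by rewinding the producer indices, copying the
 * removed entries into objs[] (oldest of the removed entries first).
 * Returns how many entries were actually reverted.
 */
static unsigned int
ring_revert_last_n(struct rte_ring *r, void **objs, unsigned int n)
{
	void **slots = (void **)&r[1];  /* ring storage follows the header */
	uint32_t head = r->prod.head;   /* private-member access */
	unsigned int i;

	n = RTE_MIN(n, rte_ring_count(r));
	for (i = 0; i != n; i++)
		objs[i] = slots[(head - n + i) & r->mask];
	r->prod.head = head - n;        /* rewind the producer head... */
	r->prod.tail = head - n;        /* ...and publish the new tail */
	return n;
}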
>>>
>>> The scenario where we need "revert" is:
>>>
>>> We use rings to manage our HW objects (counters in this case), and
>>> each core (thread) has a "cache" (an SC/SP ring) for the sake of
>>> performance.
>>>
>>> 1. Get objects from the "cache" first; if the cache is empty, we
>>> fetch a bulk of free objects from the global ring into the cache.
>>>
>>> 2. Put (free) objects into the "cache" first as well; if the cache
>>> is full, we flush a bulk of objects into the global ring to make
>>> some room in the cache.
>>>
>>> However, these HW objects cannot be reused immediately after free.
>>> They need time to be reset, and only then can they be used again.
>>>
>>> So when we flush the cache, we want the first enqueued objects to
>>> stay there, because they have a better chance of already being reset
>>> than the most recently enqueued ones.
>>>
>>> We only flush the recently enqueued objects back into the global
>>> ring, i.e. "LIFO" behavior.
>>>
>>> This is why we need to "revert" enqueued objects.
>>>
>> Wouldn't a simple stack fit you better?
>> Something like lib/stack/rte_stack_std.h, but even without the
>> spinlock around it?
>
> No, a stack is always a "LIFO" structure, right?

Yep.

>
> Here we first need the cache to work as "FIFO" in most cases (get/put),
> because the first enqueued objects have a better chance of already
> being reset, so we can reuse them.
>
> We only need "LIFO" behavior when we "flush" the cache to make some
> room, so that the next free is quick: it happens in our local cache
> and needn't touch the global ring.
>
> In short, we need a structure that supports both "FIFO" and "LIFO".

Ok, thanks for the explanation.
So you need a ring, but with the ability to revert the last N produced
elements, right?
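[And a hypothetical sketch of the flush path Jack describes, built on
the ring_revert_last_n sketch shown earlier. The names cache_put and
FLUSH_BULK are invented; this is not the mlx5 driver code. Gets and
puts use the cache ring as a plain FIFO; only when the cache is full
are the newest FLUSH_BULK entries -- those least likely to have
completed their HW reset -- peeled off and pushed to the global ring:]

#define FLUSH_BULK 64   /* how many of the newest entries to evict */

/* Put one freed object into the per-core cache, flushing if needed. */
static int
cache_put(struct rte_ring *cache, struct rte_ring *global, void *obj)
{
	void *evicted[FLUSH_BULK];
	unsigned int n;

	if (rte_ring_enqueue(cache, obj) == 0)
		return 0;               /* fast path: cache had room */

	/* Cache full: LIFO step -- revert only the newest entries. */
	n = ring_revert_last_n(cache, evicted, FLUSH_BULK);
	if (rte_ring_enqueue_bulk(global, evicted, n, NULL) == 0) {
		/* Global ring full too: restore the cache and give up. */
		rte_ring_enqueue_bulk(cache, evicted, n, NULL);
		return -1;
	}
	return rte_ring_enqueue(cache, obj); /* room now exists */
}

[A plain rte_stack would give LIFO on every operation; the point of the
thread is that normal gets should still drain the oldest, most likely
already reset, objects first, so the structure has to remain a ring.]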