From: Andrew Rybchenko
To: Olivier MATZ
Date: Wed, 17 Jan 2018 18:03:11 +0300
Subject: Re: [dpdk-dev] [RFC PATCH 0/6] mempool: add bucket mempool driver
Message-ID: <23949676-3427-d311-d67c-1692431276f8@solarflare.com>
In-Reply-To: <20171214133559.i5ppbar2vko6kocu@platinum>
References: <1511539591-20966-1-git-send-email-arybchenko@solarflare.com>
 <20171214133559.i5ppbar2vko6kocu@platinum>

Hi Olivier,

first of all, many thanks for the review. See my replies/comments below.
I'll also reply to the specific patch mails.

On 12/14/2017 04:36 PM, Olivier MATZ wrote:
> Hi Andrew,
>
> Please find some comments about this patchset below.
> I'll also send some comments as replies to the specific patches.
>
> On Fri, Nov 24, 2017 at 04:06:25PM +0000, Andrew Rybchenko wrote:
>> The patch series adds a bucket mempool driver which allows allocating
>> (both physically and virtually) contiguous blocks of objects and adds
>> mempool API to do it. It is still capable of providing separate objects,
>> but it is definitely more heavy-weight than the ring/stack drivers.
>>
>> The target use case is dequeue in blocks and enqueue of separate objects
>> back (which are collected in buckets to be dequeued). So, a memory pool
>> with the bucket driver is created by an application and provided to a
>> networking PMD receive queue. The choice of the bucket driver is done
>> using rte_eth_dev_pool_ops_supported(). A PMD that relies upon
>> contiguous block allocation should report the bucket driver as the only
>> supported and preferred one.
>
> So, you are planning to use this driver for a future/existing PMD?

Yes, we're going to use it in the sfc PMD in the case of a dedicated FW
variant which utilizes the bucketing.

> Do you have numbers about the performance gain, in which conditions,
> etc... ? And are there conditions where there is a performance loss ?

Our idea here is to use it together with HW/FW which understands the
bucketing.
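Just to illustrate the application side of that (a rough sketch only; the
"bucket" ops name, the fallback and the pool parameters below are
placeholders, not something fixed by the patches):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_pool_ops.h>
    #include <rte_mempool.h>

    /* Create the Rx mempool with the ops the port reports as preferred;
     * "bucket" stands in for the new driver's ops name. */
    static struct rte_mempool *
    create_rx_pool(uint16_t port_id, unsigned int nb_mbufs)
    {
            const char *ops = "bucket";

            /* 0 means this ops is the best (preferred) choice for the port */
            if (rte_eth_dev_pool_ops_supported(port_id, ops) != 0)
                    ops = rte_mbuf_best_mempool_ops();

            return rte_pktmbuf_pool_create_by_ops("rx_pool", nb_mbufs,
                            256 /* cache */, 0 /* priv size */,
                            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), ops);
    }

I.e. a PMD which requires contiguous blocks reports the bucket ops as its
only supported and preferred one, and the application simply follows that
preference when creating the Rx pool.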
It adds some load on the CPU to track the buckets, but block/bucket
dequeue compensates for it. We'll try to prepare performance figures once
the solution is close to final. Hopefully pretty soon.

>> The number of objects in the contiguous block is a function of the
>> bucket memory size (.config option) and the total element size.
>
> The size of the bucket memory is hardcoded to 32KB.
> Why this value?

It is just an example. In fact, we test mainly with 64 KB and 128 KB.

> Won't that be an issue if the user wants to use larger objects?

Ideally it should be configurable at start-up, but that requires a way to
specify driver-specific parameters passed to the mempool on allocation.
For now we have decided to leave this task for the future, since there is
no clear understanding of what it should look like. If you have ideas,
please share them; we would be thankful.

>> As I understand, it breaks the ABI, so it requires 3 acks in accordance
>> with policy, a deprecation notice and a mempool shared library version
>> bump. If there is a way to avoid the ABI breakage, please let us know.
>
> If my understanding is correct, the ABI breakage is caused by the
> addition of the new block dequeue operation, right?

Yes, and we'll have more ops to make population of objects customizable.

Thanks,
Andrew
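P.S. To make the ABI point a bit more concrete: struct rte_mempool_ops
entries live in the fixed-size exported ops table, so appending a new
callback slot changes their size and layout, hence the shared library
version bump. Conceptually the new op is a dequeue that hands out the
first object of each contiguous block, roughly along the lines of the
sketch below (a hypothetical prototype for illustration only; the real
one is defined in the patches):

    #include <rte_mempool.h>

    /* Hypothetical illustration of the kind of callback being added:
     * dequeue n_blocks contiguous blocks, returning a pointer to the
     * first object of each block rather than to individual objects. */
    typedef int (*dequeue_contig_block_t)(struct rte_mempool *mp,
                                          void **first_obj_table,
                                          unsigned int n_blocks);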