From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vadim Suraev
To: Olivier MATZ
Cc: dev@dpdk.org
Date: Sat, 7 Mar 2015 01:24:36 +0200
In-Reply-To: <54F6C832.4070505@6wind.com>
References: <1424992506-20484-1-git-send-email-vadim.suraev@gmail.com>
 <2601191342CEEE43887BDE71AB977258213F2C93@irsmsx105.ger.corp.intel.com>
 <54F06F3A.40401@6wind.com>
 <54F6C832.4070505@6wind.com>
Subject: Re: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
List-Id: patches and discussions about DPDK

Hi Olivier,

I realized that when the mempool's local cache is enabled (cache size > 0),
the following can happen: say the mempool size is X and the local cache
currently holds Y entries (it is not empty, Y > 0). An attempt to allocate
a bulk whose size is greater than the local cache size (its maximum) and
also greater than X - Y (the number of entries left in the ring) will fail.

The reason is that __mempool_get_bulk checks whether the requested bulk is
larger than mp->cache_size and, if so, falls back to ring_dequeue. In this
case the ring alone does not contain enough entries, while the sum of ring
entries and cache length may be greater than or equal to the bulk size, so
in theory the bulk could be satisfied.

Is this the expected behaviour? Am I missing something?
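To make the scenario concrete, here is a minimal sketch (the function name,
pool size, cache size and bulk size are made up for illustration):

#include <rte_mempool.h>

/* Assume the pool was created with size X = 512 and cache_size = 128, and
 * the caller's per-lcore cache currently holds Y = 32 objects, so the ring
 * holds X - Y = 480 entries. */
static int try_big_bulk(struct rte_mempool *mp)
{
	void *objs[496];

	/* 496 > mp->cache_size, so __mempool_get_bulk() bypasses the
	 * per-lcore cache and dequeues directly from the ring. The ring
	 * alone holds only 480 entries, so this fails (-ENOENT), although
	 * ring + cache = 480 + 32 = 512 >= 496, and rte_mempool_count()
	 * reports 512 available objects. */
	return rte_mempool_get_bulk(mp, objs, 496);
}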
By the way, rte_mempool_count returns the ring count plus the sum of all the
local caches; IMHO it can be misleading, even doubly so.

Regards,
Vadim.

On Wed, Mar 4, 2015 at 10:54 AM, Olivier MATZ wrote:
> Hi Vadim,
>
> On 02/27/2015 06:09 PM, Vadim Suraev wrote:
>> >Indeed, this function looks useful, and I also have a work in progress
>> >on this topic, but currently it is not well tested.
>> I'm sorry, I didn't know. I won't interfere with my patch))
>
> That's not what I wanted to say :)
>
> Your patch is very welcome, I just wanted to let you know that I am also
> working in the same area, and that's why I listed the things I'm
> currently working on.
>
>> >About the inlining, I have no objection now, although Stephen may be
>> >right. I think we can consider un-inlining some functions, based on
>> >performance measurements.
>> I've also noticed that in many cases it makes no difference. It seems to
>> be a trade-off.
>>
>> >- clarify the difference between raw_alloc/raw_free and
>> > mempool_get/mempool_put: For instance, I think that the reference
>> > counter initialization should be managed by rte_pktmbuf_reset() like
>> > the rest of the fields, therefore raw_alloc/raw_free could be replaced
>> It looks useful to me, since not all of the fields are used by every
>> particular application, so if the allocation is decoupled from the
>> reset, one can save some cycles.
>
> Yes, this is also a trade-off between maintainability and speed, and
> speed is probably the first constraint for the DPDK. But maybe we can
> find an alternative that is both fast and maintainable.
>
> Thanks,
> Olivier
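P.S. To illustrate the "decouple allocation from reset" point quoted above,
a rough sketch only (the helper name is made up, and it initializes just the
fields a hypothetical application needs instead of going through
rte_pktmbuf_alloc(), i.e. raw allocation plus a full rte_pktmbuf_reset()):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical helper: take a raw mbuf from the pool and initialize only
 * what this particular application actually uses. */
static inline struct rte_mbuf *
my_pktmbuf_alloc_minimal(struct rte_mempool *mp)
{
	void *obj;
	struct rte_mbuf *m;

	if (rte_mempool_get(mp, &obj) < 0)
		return NULL;
	m = obj;

	/* Currently done by the raw allocation path; per the discussion
	 * above it could move into rte_pktmbuf_reset() with the rest of
	 * the fields. */
	rte_mbuf_refcnt_set(m, 1);

	/* Only the fields this application cares about. */
	m->next = NULL;
	m->nb_segs = 1;
	m->pkt_len = 0;
	m->data_len = 0;
	m->data_off = RTE_PKTMBUF_HEADROOM;

	return m;
}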