From: Olivier MATZ
To: Vadim Suraev
Cc: dev@dpdk.org
Date: Mon, 09 Mar 2015 09:38:40 +0100
Subject: Re: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization

Hi Vadim,

On 03/07/2015 12:24 AM, Vadim Suraev wrote:
> Hi, Olivier,
> I realized that if the local cache for the mempool is enabled and
> greater than 0, then if, say, the mempool size is X and the local
> cache length is Y (and it is not empty, Y > 0), an attempt to
> allocate a bulk whose size is greater than the local cache size
> (max) and greater than X - Y (which is the number of entries in the
> ring) will fail.
> The reason is: __mempool_get_bulk checks whether the bulk to be
> allocated is greater than mp->cache_size and, if so, falls through
> to ring_dequeue. In this case the ring does not contain enough
> entries, while the sum of the ring entries and the cache length may
> be greater than or equal to the bulk size, so theoretically the
> bulk could be allocated.
> Is this expected behaviour? Am I missing something?

I think it's the expected behavior: the code of mempool_get() tries
to minimize the number of tests. In this situation, even if
len(mempool) + len(cache) is greater than the number of requested
objects, we are almost out of buffers, so returning ENOBUF is not a
problem.

If the user wants to ensure that he can allocate at least X buffers,
he can create the pool with:

  mempool_create(X + cache_size * MAX_LCORE)

> By the way, rte_mempool_count returns the ring count plus the sum
> of all local caches; IMHO it could mislead, even twice.

Right, today rte_mempool_count() cannot really be used for anything
other than debug or stats. Adding rte_mempool_common_count() and
rte_mempool_cache_len() may be useful to give the user better control
(and they would be faster because they would not browse the cache
lengths of all lcores). But we have to keep in mind that for
multi-consumer pools, checking the common count before retrieving
objects is useless, because other lcores can retrieve objects at the
same time.

Regards,
Olivier
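
P.S. To make the failure mode concrete, here is a toy model of the get
path in plain C. It is only a sketch of the behavior discussed above,
not the real DPDK code (all names and numbers are made up), but it
compiles standalone and shows a bulk get failing even though enough
objects exist overall:

  #include <stdio.h>

  /* Toy model of a mempool with one per-lcore cache: only the
   * counts matter here. */
  struct toy_pool {
      unsigned ring_count; /* objects in the common ring (X - Y) */
      unsigned cache_len;  /* objects in this lcore's cache (Y) */
      unsigned cache_size; /* configured cache size */
  };

  /* Mirrors the branch described above: a request larger than
   * cache_size skips the cache and must be served by the ring alone. */
  static int toy_get_bulk(struct toy_pool *p, unsigned n)
  {
      if (n <= p->cache_size && n <= p->cache_len) {
          p->cache_len -= n;   /* served from the cache */
          return 0;            /* (cache refill path omitted) */
      }
      if (n > p->ring_count)   /* cached objects are invisible here */
          return -1;           /* the ENOBUF case from above */
      p->ring_count -= n;
      return 0;
  }

  int main(void)
  {
      /* X = 96 objects in total, Y = 32 of them in the local cache */
      struct toy_pool p = { 64, 32, 32 };

      /* 96 objects exist, but 80 > cache_size forces the ring-only
       * path, and 80 > 64, so the allocation fails. */
      printf("get_bulk(80): %s\n", toy_get_bulk(&p, 80) ? "FAIL" : "ok");
      printf("get_bulk(16): %s\n", toy_get_bulk(&p, 16) ? "FAIL" : "ok");
      return 0;
  }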
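
The sizing rule above, written out as an actual call (the real macro
is RTE_MAX_LCORE; the pool name, element size, cache size and count
below are arbitrary example values):

  #include <rte_mempool.h>
  #include <rte_lcore.h>   /* RTE_MAX_LCORE */

  #define ELT_SIZE   2048
  #define CACHE_SIZE 256
  #define NEEDED     8192  /* X: bulk size that must always succeed */

  static struct rte_mempool *create_pool(void)
  {
      /* Oversize by CACHE_SIZE * RTE_MAX_LCORE so that even if every
       * per-lcore cache is full, the ring still holds NEEDED objects. */
      return rte_mempool_create("example_pool",
                                NEEDED + CACHE_SIZE * RTE_MAX_LCORE,
                                ELT_SIZE, CACHE_SIZE,
                                0,          /* private_data_size */
                                NULL, NULL, /* mp_init, mp_init_arg */
                                NULL, NULL, /* obj_init, obj_init_arg */
                                SOCKET_ID_ANY, 0 /* socket_id, flags */);
  }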
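
And the two counting helpers mentioned above could look roughly like
this. They are hypothetical (they do not exist in DPDK at the time of
writing), and the field accesses assume the current struct rte_mempool
layout with caches enabled:

  #include <rte_mempool.h>
  #include <rte_ring.h>

  /* Hypothetical: objects in the common ring only, without
   * browsing the per-lcore caches. */
  static unsigned rte_mempool_common_count(const struct rte_mempool *mp)
  {
      return rte_ring_count(mp->ring);
  }

  /* Hypothetical: length of one lcore's local cache.  For
   * multi-consumer pools both values can change under our feet. */
  static unsigned rte_mempool_cache_len(const struct rte_mempool *mp,
                                        unsigned lcore_id)
  {
      return mp->local_cache[lcore_id].len;
  }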