From: Olivier MATZ
Date: Wed, 18 Dec 2013 10:02:35 +0100
To: "Schumm, Ken"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] When are mbufs released back to the mempool?

Hello Ken,

On 12/17/2013 07:13 PM, Schumm, Ken wrote:
> When running l2fwd the number of available mbufs returned by
> rte_mempool_count() starts at 7680 on an idle system.
>
> As traffic commences the count declines from 7680 to
> 5632 (expected).

You are right, some mbufs are kept in two places:

- In the mempool per-core cache: as you noticed, each lcore has a
  cache to avoid a (more) costly access to the common pool.

- In the hardware transmission ring of the NIC: say the size of your
  hw tx ring is 512; then, when transmitting the 513th mbuf, you free
  the first mbuf that was given to the NIC. Therefore,
  (hw-tx-ring-size * nb-tx-queue) mbufs can be held in the hw tx
  rings. The same applies to the rx rings, of course, but it is easier
  to see there because they are filled when the driver is initialized.

When choosing the number of mbufs, you need to take a value greater
than:

  (hw-rx-ring-size * nb-rx-queue) + (hw-tx-ring-size * nb-tx-queue)
  + (nb-lcores * mbuf-pool-cache-size)

> Is this also true of ring buffers?

No, if you are talking about rte_ring, there is no cache in that
structure.

Regards,
Olivier
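
As a rough illustration of the sizing rule above, the lower bound can be
computed as in the following C sketch. The ring sizes, queue counts,
lcore count and cache size below are hypothetical placeholders, not
values taken from l2fwd or from the message above; replace them with
the actual configuration of your application.

/*
 * Sketch of the mbuf pool sizing rule:
 * pool size > rx rings + tx rings + per-lcore caches.
 * All values are assumed placeholders.
 */
#include <stdio.h>

#define HW_RX_RING_SIZE  128  /* descriptors per rx queue (assumed) */
#define HW_TX_RING_SIZE  512  /* descriptors per tx queue (assumed) */
#define NB_RX_QUEUE      1    /* rx queues in use (assumed) */
#define NB_TX_QUEUE      1    /* tx queues in use (assumed) */
#define NB_LCORES        4    /* lcores using the mbuf pool (assumed) */
#define MBUF_CACHE_SIZE  32   /* per-lcore mempool cache size (assumed) */

int main(void)
{
	/* mbufs that can sit in rx rings, tx rings and per-lcore
	 * caches at the same time */
	unsigned min_mbufs =
		(HW_RX_RING_SIZE * NB_RX_QUEUE) +
		(HW_TX_RING_SIZE * NB_TX_QUEUE) +
		(NB_LCORES * MBUF_CACHE_SIZE);

	/* the mbuf pool must be strictly larger than this, plus
	 * whatever the application itself keeps in flight */
	printf("mbuf pool size must be > %u\n", min_mbufs);
	return 0;
}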