From: "Schumm, Ken"
To: Olivier MATZ
Cc: "dev@dpdk.org"
Date: Thu, 19 Dec 2013 19:09:48 +0000
Subject: Re: [dpdk-dev] When are mbufs released back to the mempool?

Hello Olivier,

Do you know what the reason is for the tx rings filling up and holding on to mbufs?

It seems they could be freed when the DMA transfer is acknowledged instead of waiting until the ring is full.

Thanks!
Ken Schumm

-----Original Message-----
From: Olivier MATZ [mailto:olivier.matz@6wind.com]
Sent: Wednesday, December 18, 2013 1:03 AM
To: Schumm, Ken
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] When are mbufs released back to the mempool?

Hello Ken,

On 12/17/2013 07:13 PM, Schumm, Ken wrote:
> When running l2fwd, the number of available mbufs returned by
> rte_mempool_count() starts at 7680 on an idle system.
>
> As traffic commences, the count declines from 7680 to
> 5632 (expected).

You are right, some mbufs are kept in two places:

- in the mempool per-core cache: as you noticed, each lcore has a cache
  to avoid a (more) costly access to the common pool.

- the mbufs also stay in the hardware transmission ring of the NIC.
  Say the size of your hw tx ring is 512: when transmitting the 513th
  mbuf, you will free the first mbuf given to your NIC. Therefore, up to
  (hw-tx-ring-size * nb-tx-queue) mbufs can be held in the tx hw rings.
  Of course, the same applies to rx rings, but it is easier to see there
  because they are filled when the driver is initialized.

When choosing the number of mbufs, you need to take a value greater than:

  (hw-rx-ring-size * nb-rx-queue) + (hw-tx-ring-size * nb-tx-queue)
  + (nb-lcores * mbuf-pool-cache-size)

> Is this also true of ring buffers?

No, if you are talking about rte_ring, there is no cache in that structure.

Regards,
Olivier
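
As an illustration of the sizing formula above, here is a minimal C sketch of how a pool size might be computed. The ring sizes, queue counts, cache size, and the extra headroom below are assumed example values, not taken from this thread; plug in whatever your application actually configures.

/* A minimal sketch of the mbuf-count formula from the reply above.
 * All values here are illustrative assumptions. */
#define HW_RX_RING_SIZE   128   /* descriptors per RX queue (assumed) */
#define HW_TX_RING_SIZE   512   /* descriptors per TX queue (assumed) */
#define NB_RX_QUEUE       2     /* RX queues in use (assumed) */
#define NB_TX_QUEUE       2     /* TX queues in use (assumed) */
#define NB_LCORES         4     /* lcores that touch the pool (assumed) */
#define MBUF_CACHE_SIZE   32    /* per-lcore mempool cache size (assumed) */

/* Lower bound: mbufs that can be pinned simultaneously in RX rings,
 * TX rings and per-lcore mempool caches. */
#define MIN_NB_MBUF ((HW_RX_RING_SIZE * NB_RX_QUEUE) + \
                     (HW_TX_RING_SIZE * NB_TX_QUEUE) + \
                     (NB_LCORES * MBUF_CACHE_SIZE))

/* Add headroom for mbufs the application itself is still processing. */
#define NB_MBUF (MIN_NB_MBUF + 1024)

The headroom term is a design choice rather than part of the formula: the formula only covers mbufs pinned by the rings and caches, so any mbufs your application holds on top of that (buffered, queued, or in flight between cores) need extra slack in the pool.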