From: "Venkatesan, Venky"
To: Robert Sanford, Dmitry Vyal
Cc: dev@dpdk.org
Date: Thu, 19 Sep 2013 19:43:22 +0000
Subject: Re: [dpdk-dev] How to fight forwarding performance regression on large mempool sizes.

Dmitry,

One other question - which version of DPDK are you running?

-Venky

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Robert Sanford
Sent: Thursday, September 19, 2013 12:40 PM
To: Dmitry Vyal
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] How to fight forwarding performance regression on large mempool sizes.

Hi Dmitry,

The biggest drop-off seems to be from size 128K to 256K. Are you already using 1GB huge pages (rather than 2MB)?

I would think that it would not use over 1GB until you ask for 512K mbufs or more.

--
Regards,
Robert

On Thu, Sep 19, 2013 at 3:50 AM, Dmitry Vyal wrote:
> Good day everyone,
>
> While working on IP packet defragmentation I had to enlarge the
> mempool size to provide a large enough time window for assembling a
> fragment sequence. Unfortunately, I got a performance regression: if I
> enlarge the mempool size from 2**12 to 2**20 mbufs, forwarding
> performance for non-fragmented packets drops from ~8.5 Mpps to
> ~5.5 Mpps on a single core. I made only a single measurement per size,
> so the data are noisy, but the trend is evident:
>
> SIZE 4096    - 8.47 Mpps
> SIZE 8192    - 8.26 Mpps
> SIZE 16384   - 8.29 Mpps
> SIZE 32768   - 8.31 Mpps
> SIZE 65536   - 8.12 Mpps
> SIZE 131072  - 7.93 Mpps
> SIZE 262144  - 6.22 Mpps
> SIZE 524288  - 5.72 Mpps
> SIZE 1048576 - 5.63 Mpps
>
> And I need even larger sizes.
>
> I want to ask for advice on how best to tackle this. One way I'm
> thinking about is to make two mempools: a large one for fragments (we
> may accumulate a big number of them) and a small one for full packets,
> which we just forward burst by burst. Is it possible to configure RSS
> to distribute packets between queues according to this scheme?
> Perhaps there are better ways?
>
> Thanks,
> Dmitry
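
The arithmetic behind Robert's 512K figure, assuming the usual ~2 KB of
data room per mbuf (the true footprint is somewhat larger once the mbuf
header and headroom are counted):

     524288 mbufs * 2048 B = 1 GiB
    1048576 mbufs * 2048 B = 2 GiB

So from 512K mbufs upward the pool no longer fits in a single 1 GB huge
page, and with 2 MB pages a pool of this size already spans hundreds of
pages. Outgrowing TLB coverage is one plausible explanation for the
slope in Dmitry's numbers, though the thread does not confirm it.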
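
A minimal sketch of the two-pool scheme Dmitry describes, written
against the DPDK 1.x-era rte_mempool_create()/rte_pktmbuf_init()
pattern; the pool names, element counts, and cache size here are
illustrative assumptions, not values taken from the thread:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *fwd_pool;  /* small: full packets, forwarded burst by burst */
static struct rte_mempool *frag_pool; /* large: fragments held during reassembly */

static void
create_pools(void)
{
	/* Keep the forwarding pool small so its working set stays hot. */
	fwd_pool = rte_mempool_create("fwd_pool", 4096, MBUF_SIZE, 256,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL, /* pool-level init */
			rte_pktmbuf_init, NULL,      /* per-mbuf init */
			rte_socket_id(), 0);

	/* The fragment pool can then grow without touching the fast path. */
	frag_pool = rte_mempool_create("frag_pool", 1 << 20, MBUF_SIZE, 256,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL,
			rte_pktmbuf_init, NULL,
			rte_socket_id(), 0);
}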
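
Each RX queue can then be bound to its own pool with
rte_eth_rx_queue_setup(). One caveat on the RSS part of the question:
RSS only spreads packets by a hash over address (and, for unfragmented
packets, port) fields, so on its own it cannot send fragments to one
queue and full packets to another; exact steering of fragments would
need a filtering feature such as Flow Director where the NIC supports
it. A sketch, with the port id, queue depth, and thresholds all as
assumptions:

#include <rte_ethdev.h>

static int
setup_rx_queues(uint8_t port_id)
{
	struct rte_eth_rxconf rx_conf = {
		.rx_thresh = { .pthresh = 8, .hthresh = 8, .wthresh = 4 },
	};
	int ret;

	/* Queue 0: regular traffic, drawn from the small pool. */
	ret = rte_eth_rx_queue_setup(port_id, 0, 128, rte_socket_id(),
				     &rx_conf, fwd_pool);
	if (ret < 0)
		return ret;

	/* Queue 1: fragments, drawn from the large pool. */
	return rte_eth_rx_queue_setup(port_id, 1, 128, rte_socket_id(),
				      &rx_conf, frag_pool);
}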