From: Robert Sanford
To: Dmitry Vyal
Cc: dev@dpdk.org
Date: Thu, 19 Sep 2013 15:39:49 -0400
In-Reply-To: <523AACC9.8010304@gmail.com>
Subject: Re: [dpdk-dev] How to fight forwarding performance regression on large mempool sizes.

Hi Dmitry,

The biggest drop-off seems to be from size 128K to 256K. Are you already
using 1 GB huge pages (rather than 2 MB)? I would expect the pool not to
exceed 1 GB until you ask for 512K mbufs or more. I've put rough sketches
of the arithmetic and of the two-pool idea below your quoted message.

--
Regards,
Robert

On Thu, Sep 19, 2013 at 3:50 AM, Dmitry Vyal wrote:

> Good day everyone,
>
> While working on IP packet defragmentation I had to enlarge the mempool
> size to provide a large enough time window for assembling fragment
> sequences. Unfortunately, I got a performance regression: when I enlarge
> the mempool size from 2**12 to 2**20 mbufs, forwarding performance for
> non-fragmented packets drops from ~8.5 Mpps to ~5.5 Mpps on a single
> core. I made only a single measurement per size, so the data are noisy,
> but the trend is evident:
>
> SIZE 4096    - 8.47 Mpps
> SIZE 8192    - 8.26 Mpps
> SIZE 16384   - 8.29 Mpps
> SIZE 32768   - 8.31 Mpps
> SIZE 65536   - 8.12 Mpps
> SIZE 131072  - 7.93 Mpps
> SIZE 262144  - 6.22 Mpps
> SIZE 524288  - 5.72 Mpps
> SIZE 1048576 - 5.63 Mpps
>
> And I need even larger sizes.
>
> I want to ask for advice on how best to tackle this. One way I'm
> considering is to make two mempools: a large one for fragments (we may
> accumulate a big number of them) and a small one for full packets, which
> we just forward burst by burst. Is it possible to configure RSS to
> distribute packets between queues according to this scheme? Or perhaps
> there are better ways?
>
> Thanks,
> Dmitry
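
Here is the arithmetic behind the 1 GB estimate, as a minimal sketch of an
l2fwd-style pool creation with the 1.x-era mempool API. The pool name,
NB_MBUF and CACHE_SIZE are placeholders, not something from your setup:

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

/* Rough per-element cost with the usual 2 KB data room:
 *   2048 (data) + RTE_PKTMBUF_HEADROOM + sizeof(struct rte_mbuf),
 * i.e. a bit over 2 KB once mempool header/trailer overhead is added.
 * So, approximately:
 *   2^18 (256K) mbufs -> ~0.6 GB
 *   2^19 (512K) mbufs -> ~1.2 GB, past a single 1 GB huge page.
 * With 2 MB pages even the mid-sized pools span hundreds of pages,
 * so DTLB misses grow with the pool's working set. */
#define MBUF_SIZE  (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
#define NB_MBUF    (1 << 19)
#define CACHE_SIZE 256   /* per-lcore cache; keep NB_MBUF % CACHE_SIZE == 0 */

static struct rte_mempool *
create_mbuf_pool(void)
{
	return rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE,
				  CACHE_SIZE,
				  sizeof(struct rte_pktmbuf_pool_private),
				  rte_pktmbuf_pool_init, NULL,
				  rte_pktmbuf_init, NULL,
				  rte_socket_id(), 0);
}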
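
On the two-pool idea: each RX queue takes its own mempool in
rte_eth_rx_queue_setup(), so once the NIC can steer fragments to a
dedicated queue, only that queue needs to draw from the big pool and the
fast path keeps a small, cache-friendly one. A sketch, where port_id,
nb_rxd, rx_conf and the two pools are placeholders:

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Queue 0: fast forwarding path, small pool (e.g. 2^13 mbufs).
 * Queue 1: fragments only, large pool (e.g. 2^20 mbufs). */
static int
setup_rx_queues(uint8_t port_id, uint16_t nb_rxd,
		const struct rte_eth_rxconf *rx_conf,
		struct rte_mempool *fwd_pool,
		struct rte_mempool *frag_pool)
{
	int ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
				     rte_socket_id(), rx_conf, fwd_pool);
	if (ret < 0)
		return ret;
	return rte_eth_rx_queue_setup(port_id, 1, nb_rxd,
				      rte_socket_id(), rx_conf, frag_pool);
}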
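
On the RSS question: as far as I know, RSS only spreads packets across
queues by hash, it does not classify them, so it cannot by itself send
fragments to one queue and full packets to another. Fragmented IPv4
packets carry no L4 ports and are hashed on addresses only, so steering by
packet type would need something like Flow Director or a software check on
the lcore. For completeness, a hedged sketch of plain RSS configuration
(exact flag names vary between DPDK versions):

#include <rte_ethdev.h>

/* RSS distributes flows by hash; it cannot express a
 * "fragments -> queue 1" rule on its own. */
static const struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,         /* use the default key */
			.rss_hf  = ETH_RSS_IPV4, /* flags vary by version */
		},
	},
};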