Date: Mon, 29 Jan 2018 16:25:00 +0000
From: Bruce Richardson
To: Adrien Mazarguil
Cc: Kevin Laatz, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2 0/3] Increase default RX/TX ring sizes
Message-ID: <20180129162500.GA7904@bricha3-MOBL3.ger.corp.intel.com>
References: <20180112103053.47110-1-kevin.laatz@intel.com> <20180112104846.47396-1-kevin.laatz@intel.com> <20180116131319.GF4256@6wind.com>
In-Reply-To: <20180116131319.GF4256@6wind.com>
Organization: Intel Research and Development Ireland Ltd.
List-Id: DPDK patches and discussions

On Tue, Jan 16, 2018 at 02:13:19PM +0100, Adrien Mazarguil wrote:
> Hi Kevin,
>
> On Fri, Jan 12, 2018 at 10:48:43AM +0000, Kevin Laatz wrote:
> > Increasing the RX/TX default ring sizes to 1024/1024 to accommodate
> > faster NICs. With the increase in the number of PPS, a larger RX
> > buffer is required in order to avoid packet loss.
> > While a ring size of 128 may be large enough for 1G and possibly
> > 10G NICs, this is not going to scale to small packet sizes at 25G
> > and 40G line rates. As we are increasing the RX buffer size to
> > 1024, we also need to increase the TX buffer size to ensure that
> > the TX side does not become the bottleneck.
> >
> > v2
> >  - fixed typos in commit messages
> >  - fixed typo in Cc email address
>
> I agree with the above and this series contents but would like to
> comment anyway.
>
> Since typical TX/RX bursts are usually somewhere between 16 and 64
> packets depending on the application, increasing ring size instead of
> burst size to keep up with packet rate may mean that software
> (PMD/application) is too slow on the RX side or hardware is too slow
> on the TX side (rings always full basically), and this is worked
> around by introducing latency to absorb packet loss. This is not
> necessarily a good trade-off.

Well, if an RX burst size of 64 is in use, the existing default of 128
is definitely very much too low - though point taken about slowness on
the RX side.

> Granted the most appropriate burst/ring/threshold values always
> depend on the application and underlying hardware, and each vendor is
> responsible for documenting ideal values for typical applications by
> providing performance results.

I actually think it probably depends a lot on the NIC hardware in
question. The optimal size for an app with a 1G NIC will be different
from that for a 25G or 100G NIC. I therefore think in a future release
we should have an ethdev API to allow each driver to propose its
recommended ring sizes. The app can perhaps provide a burst size hint
as a parameter. What do you think?

> My concern is that modifying defaults makes performance comparison
> with past DPDK releases more difficult for existing automated tests
> that do not provide ring size and other parameters. There should be
> an impact given that larger rings require more buffers, use more
> cache, and access more memory in general.

I'm actually not too concerned about that, as I would expect most
serious performance comparisons to be done with individually tuned RX
and TX ring size parameters. For zero-loss throughput tests, larger
ring sizes are needed on any of the NICs I've tested anyway.

Overall, I feel this change is long overdue.

Series Acked-by: Bruce Richardson