Date: Tue, 30 Jan 2018 14:48:28 +0100
From: Adrien Mazarguil
To: Bruce Richardson
Cc: Kevin Laatz, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2 0/3] Increase default RX/TX ring sizes
Message-ID: <20180130134828.GL4256@6wind.com>
In-Reply-To: <20180129162500.GA7904@bricha3-MOBL3.ger.corp.intel.com>

On Mon, Jan 29, 2018 at 04:25:00PM +0000, Bruce Richardson wrote:
> On Tue, Jan 16, 2018 at 02:13:19PM +0100, Adrien Mazarguil wrote:
> > Hi Kevin,
> >
> > On Fri, Jan 12, 2018 at 10:48:43AM +0000, Kevin Laatz wrote:
> > > Increasing the default RX/TX ring sizes to 1024/1024 to accommodate
> > > faster NICs. With the increase in packet rates, a larger RX buffer is
> > > required to avoid packet loss. While a ring size of 128 may be large
> > > enough for 1G and possibly 10G NICs, it is not going to scale to
> > > small packet sizes at 25G and 40G line rates. As we are increasing
> > > the RX buffer size to 1024, we also need to increase the TX buffer
> > > size to ensure that the TX side does not become the bottleneck.
> > >
> > > v2
> > > - fixed typos in commit messages
> > > - fixed typo in Cc email address
> >
> > I agree with the above and with the contents of this series, but would
> > like to comment anyway.
> >
> > Since typical TX/RX bursts are usually somewhere between 16 and 64
> > packets depending on the application, increasing ring size instead of
> > burst size to keep up with packet rate may mean that software
> > (PMD/application) is too slow on the RX side or hardware is too slow
> > on the TX side (rings always full, basically), and this is worked
> > around by introducing latency to absorb packet loss. This is not
> > necessarily a good trade-off.
>
> Well, if an RX burst size of 64 is in use, the existing default of 128 is
> definitely much too low - though point taken about slowness of RX.

Agreed, I just wanted to stress that increasing TX/RX ring sizes may still
leave the rings full; thanks to their FIFO nature, the result is increased
latency and resource consumption while packets are still dropped whenever
HW/SW is too slow. This is not the proper workaround for such a scenario
(which is not uncommon).

> > Granted, the most appropriate burst/ring/threshold values always depend
> > on the application and underlying hardware, and each vendor is
> > responsible for documenting ideal values for typical applications by
> > providing performance results.
>
> I actually think it probably depends a lot on the NIC hardware in
> question. The optimal size for an app with a 1G NIC will be different
> for a 25G or 100G NIC. I therefore think in a future release we should
> have an ethdev API to allow each driver to propose its recommended ring
> sizes. The app can perhaps provide a burst size hint as a parameter.
> What do you think?

Sounds like a good idea. It could also be implemented without hurting any
API by making 0 descriptors a special value for rte_eth_[rt]x_queue_setup(),
so that being lazy translates to optimized defaults at the cost of some
uncertainty regarding mbuf pool sizing. PMDs that do not implement this
would reject queue creation as they likely already do.
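For illustration, a minimal sketch of how an application could rely on such
a convention (purely hypothetical, since no PMD interprets 0 this way today;
the helper function is made up for the example):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>

    /* Hypothetical convention: nb_desc == 0 asks the PMD for its own
     * optimized default ring size. PMDs without support would reject the
     * call, as they already do for sizes outside their limits. */
    static int
    setup_queues_with_defaults(uint16_t port_id, struct rte_mempool *mb_pool)
    {
            int ret;

            /* Caveat mentioned above: with an unknown ring size, the
             * application cannot size mb_pool exactly in advance. */
            ret = rte_eth_rx_queue_setup(port_id, 0 /* queue */,
                                         0 /* nb_rx_desc: PMD default */,
                                         rte_socket_id(), NULL, mb_pool);
            if (ret < 0)
                    return ret;
            return rte_eth_tx_queue_setup(port_id, 0 /* queue */,
                                          0 /* nb_tx_desc: PMD default */,
                                          rte_socket_id(), NULL);
    }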
> > My concern is that modifying defaults makes performance comparison with
> > past DPDK releases more difficult for existing automated tests that do
> > not provide ring size and other parameters. There should be an impact
> > given that larger rings require more buffers, use more cache, and
> > access more memory in general.
> >
> I'm actually not too concerned about that, as I would expect most
> serious performance comparisons to be done with individually tuned RX
> and TX ring size parameters. For zero-loss throughput tests, larger ring
> sizes are needed for any of the NICs I've tested anyway.
>
> Overall, I feel this change is long overdue.
>
> Series Acked-by: Bruce Richardson

True, therefore:

Acked-by: Adrien Mazarguil

-- 
Adrien Mazarguil
6WIND