Date: Thu, 12 May 2022 00:00:48 +0100
Subject: Re: [PATCH v1 0/5] Direct re-arming of buffers on receive side
From: Konstantin Ananyev
To: Feifei Wang
Cc: dev@dpdk.org, nd@arm.com
References: <20220420081650.2043183-1-feifei.wang2@arm.com>
In-Reply-To: <20220420081650.2043183-1-feifei.wang2@arm.com>
List-Id: DPDK patches and discussions

> Currently, the transmit side frees the buffers into the lcore cache and
> the receive side allocates buffers from the lcore cache. The transmit
> side typically frees 32 buffers, resulting in 32*8=256B of stores to the
> lcore cache. The receive side allocates 32 buffers and stores them in
> the receive side software ring, resulting in 32*8=256B of stores and
> 256B of loads from the lcore cache.
>
> This patch proposes a mechanism to avoid freeing to/allocating from
> the lcore cache, i.e. the receive side will free the buffers from the
> transmit side directly into its software ring. This avoids the 256B
> of loads and stores introduced by the lcore cache. It also frees up
> the cache lines used by the lcore cache.
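
To restate my understanding of the proposed fast path in code form (a rough
sketch only; the struct and function names are mine for illustration, not the
ones used in the patch): the Rx re-arm path takes mbufs straight out of the
SW ring of the Tx queue it is mapped to, so neither side touches the
per-lcore mempool cache.

#include <stdint.h>

struct rte_mbuf;

struct sw_entry { struct rte_mbuf *mbuf; };

struct txq_sketch {
	struct sw_entry *sw_ring;	/* Tx software ring */
	uint16_t tx_next_dd;		/* last descriptor already done by HW */
	uint16_t tx_rs_thresh;		/* buffers freed per round */
};

struct rxq_sketch {
	struct sw_entry *sw_ring;	/* Rx software ring */
	uint16_t rxrearm_start;		/* where re-arming continues */
	struct txq_sketch *direct_txq;	/* Tx queue mapped 1:1 to this Rx queue */
};

static inline void
rx_direct_rearm(struct rxq_sketch *rxq, uint16_t nb)
{
	struct txq_sketch *txq = rxq->direct_txq;
	uint16_t tx_id = (uint16_t)(txq->tx_next_dd - txq->tx_rs_thresh + 1);
	uint16_t i;

	/* move mbuf pointers Tx SW ring -> Rx SW ring,
	 * no mempool cache involved */
	for (i = 0; i < nb; i++)
		rxq->sw_ring[rxq->rxrearm_start + i].mbuf =
			txq->sw_ring[tx_id + i].mbuf;

	/* HW descriptor refill and ring index/wrap handling omitted */
}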
> However, this solution poses several constraints:
>
> 1) The receive queue needs to know which transmit queue it should take
> the buffers from. The application logic decides which transmit port to
> use to send out the packets. In many use cases the NIC might have a
> single port ([1], [2], [3]), in which case a given transmit queue is
> always mapped to a single receive queue (1:1 Rx queue : Tx queue). This
> is easy to configure.
>
> If the NIC has 2 ports (there are several references), then we will have
> a 1:2 (Rx queue : Tx queue) mapping, which is still easy to configure.
> However, if this is generalized to 'N' ports, the configuration can be
> long. Moreover, the PMD would have to scan a list of transmit queues to
> pull the buffers from.

Just to re-iterate some generic concerns about this proposal:

- We effectively link RX and TX queues: when this feature is enabled, the
  user can't stop a TX queue without stopping the linked RX queue first.
  Right now the user is free to start/stop any queue at will. If this
  feature allows linking queues from different ports, then even the ports
  become dependent on each other, and the user has to take extra care when
  managing such ports.

- Very limited usage scenario: it will have a positive effect only when we
  have a fixed forwarding mapping, i.e. all (or nearly all) packets from an
  RX queue are forwarded into the same TX queue.

I wonder, did you have a chance to consider a mempool-cache ZC API, similar
to the one we have for the ring? It would allow us, on the TX free path, to
avoid copying mbufs to a temporary array on the stack. Instead we could put
them straight from the TX SW ring into the mempool cache. That should save
an extra store/load per mbuf and might help to achieve some performance gain
without bypassing the mempool. It probably wouldn't be as fast as what you
are proposing, but it might be fast enough to consider as an alternative.
Again, it would be a generic solution, so we could avoid all these
implications and limitations.
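
For illustration, here is a sketch of how that put path could look on the TX
free side. Note that rte_mempool_cache_zc_put_bulk() below is just a name I
am making up for such a not-yet-existing API; only rte_mempool_default_cache()
and rte_lcore_id() are existing calls:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* hypothetical API: reserve n free slots inside the per-lcore mempool
 * cache and return a pointer to them */
void **rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
		struct rte_mempool *mp, unsigned int n);

static inline void
tx_free_bufs_zc(struct rte_mbuf **txep, uint16_t n, struct rte_mempool *mp)
{
	struct rte_mempool_cache *cache =
		rte_mempool_default_cache(mp, rte_lcore_id());
	void **slots;
	uint16_t i;

	slots = rte_mempool_cache_zc_put_bulk(cache, mp, n);
	if (slots == NULL)
		return;	/* a real driver would fall back to rte_mempool_put_bulk() */

	/* mbuf pointers go straight from the TX SW ring into the mempool
	 * cache; no temporary array on the stack */
	for (i = 0; i < n; i++)
		slots[i] = txep[i];
}

Whether the flush of the cache to the backing ring happens inside such a call
or in a separate 'finish' step is an open question; the point is only that
the per-burst copy through a stack array disappears.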
> 2) The other factor that needs to be considered is 'run-to-completion' vs
> 'pipeline' models. In the run-to-completion model, the receive side and
> the transmit side run on the same lcore serially. In the pipeline model,
> the receive side and the transmit side might run on different lcores in
> parallel. This requires locking and is not supported at this point.
>
> 3) Tx and Rx buffers must be from the same mempool. We must also ensure
> that the Tx buffer free number is equal to the Rx buffer free number:
> (txq->tx_rs_thresh == RTE_I40E_RXQ_REARM_THRESH)
> Thus, 'tx_next_dd' can be updated correctly in direct-rearm mode. This is
> because tx_next_dd is the variable used to compute the Tx sw-ring free
> location; its value is always one round ahead of the position where the
> next free starts.
>
> Current status in this RFC:
> 1) An API is added to allow mapping a TX queue to an RX queue.
>    Currently it supports 1:1 mapping.
> 2) The i40e driver is changed to do the direct re-arm of the receive
>    side.
> 3) The L3fwd application is modified to do the direct rearm mapping
>    automatically, without user config. It follows the rule that a thread
>    maps a TX queue to an RX queue based on the destination port of the
>    first received packet.
>
> Testing status:
> 1. The testing results for L3fwd are as follows:
> -------------------------------------------------------------------
> enabled direct rearm
> -------------------------------------------------------------------
> Arm:
> N1SDP (neon path):
>   without fast-free mode    with fast-free mode
>         +14.1%                    +7.0%
>
> Ampere Altra (neon path):
>   without fast-free mode    with fast-free mode
>         +17.1%                    +14.0%
>
> X86:
> Dell-8268 (limited CPU frequency):
>   sse path:
>     without fast-free mode    with fast-free mode
>           +6.96%                    +2.02%
>   avx2 path:
>     without fast-free mode    with fast-free mode
>           +9.04%                    +7.75%
>   avx512 path:
>     without fast-free mode    with fast-free mode
>           +5.43%                    +1.57%
> -------------------------------------------------------------------
> This patch does not affect the base performance of normal mode.
> Furthermore, the reason for limiting the CPU frequency is that the
> Dell-8268 can hit an i40e NIC bottleneck at maximum frequency.
>
> 2. The testing results for VPP-L3fwd are as follows:
> -------------------------------------------------------------------
> Arm:
> N1SDP (neon path):
>   with direct re-arm mode enabled
>         +7.0%
> -------------------------------------------------------------------
> For Ampere Altra and X86, the VPP-L3fwd test has not been done.
>
> Reference:
> [1] https://store.nvidia.com/en-us/networking/store/product/MCX623105AN-CDAT/NVIDIAMCX623105ANCDATConnectX6DxENAdapterCard100GbECryptoDisabled/
> [2] https://www.intel.com/content/www/us/en/products/sku/192561/intel-ethernet-network-adapter-e810cqda1/specifications.html
> [3] https://www.broadcom.com/products/ethernet-connectivity/network-adapters/100gb-nic-ocp/n1100g
>
> Feifei Wang (5):
>   net/i40e: remove redundant Dtype initialization
>   net/i40e: enable direct rearm mode
>   ethdev: add API for direct rearm mode
>   net/i40e: add direct rearm mode internal API
>   examples/l3fwd: enable direct rearm mode
>
>  drivers/net/i40e/i40e_ethdev.c          |  34 +++
>  drivers/net/i40e/i40e_rxtx.c            |   4 -
>  drivers/net/i40e/i40e_rxtx.h            |   4 +
>  drivers/net/i40e/i40e_rxtx_common_avx.h | 269 ++++++++++++++++++++++++
>  drivers/net/i40e/i40e_rxtx_vec_avx2.c   |  14 +-
>  drivers/net/i40e/i40e_rxtx_vec_avx512.c | 249 +++++++++++++++++++++-
>  drivers/net/i40e/i40e_rxtx_vec_neon.c   | 141 ++++++++++++-
>  drivers/net/i40e/i40e_rxtx_vec_sse.c    | 170 ++++++++++++++-
>  examples/l3fwd/l3fwd_lpm.c              |  16 +-
>  lib/ethdev/ethdev_driver.h              |  15 ++
>  lib/ethdev/rte_ethdev.c                 |  14 ++
>  lib/ethdev/rte_ethdev.h                 |  31 +++
>  lib/ethdev/version.map                  |   1 +
>  13 files changed, 949 insertions(+), 13 deletions(-)