From: Alexander Kozyrev <akozyrev@nvidia.com>
To: dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, viacheslavo@nvidia.com
Date: Sun, 8 Nov 2020 04:23:57 +0000
Message-Id: <20201108042357.28542-1-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20201106000442.26059-1-akozyrev@nvidia.com>
References: <20201106000442.26059-1-akozyrev@nvidia.com>
Subject: [dpdk-dev] [PATCH v2] net/mlx5: improve vMPRQ descriptors allocation locality

There is a performance penalty for the replenish scheme used in the
vectorized Rx burst for both MPRQ and SPRQ. Mbuf elements are filled at
the end of the mbufs array and replenished at the beginning. That leads
to an increase in cache misses and a performance drop. The more Rx
descriptors are in use, the worse the situation gets.

Change the allocation scheme for the vectorized MPRQ Rx burst: allocate
new mbufs only when the consumed mbufs are almost depleted (always keep
a one-burst gap between the allocated and consumed indices). Keeping
only a small number of mbufs allocated improves cache locality and
boosts performance significantly.

Unfortunately, this approach cannot be applied to the SPRQ Rx burst
routine. In the MPRQ Rx burst we simply copy packets from external MPRQ
buffers or attach these buffers to mbufs. In the SPRQ Rx burst we let
the NIC fill the mbufs for us. Hence, keeping only a small number of
allocated mbufs would limit the NIC's ability to fill as many buffers
as possible, which offsets the advantage of better cache locality.

Fixes: 0f20acbf5e ("net/mlx5: implement vectorized MPRQ burst")

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
v1: https://patchwork.dpdk.org/patch/83779/
v2: fixed assertion for the number of mbufs to replenish

 drivers/net/mlx5/mlx5_rxtx_vec.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 469ea8401d..68c51dce31 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -145,16 +145,17 @@ mlx5_rx_mprq_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
 	const uint32_t strd_n = 1 << rxq->strd_num_n;
 	const uint32_t elts_n = wqe_n * strd_n;
 	const uint32_t wqe_mask = elts_n - 1;
-	uint32_t n = elts_n - (rxq->elts_ci - rxq->rq_pi);
+	uint32_t n = rxq->elts_ci - rxq->rq_pi;
 	uint32_t elts_idx = rxq->elts_ci & wqe_mask;
 	struct rte_mbuf **elts = &(*rxq->elts)[elts_idx];
 
-	/* Not to cross queue end. */
-	if (n >= rxq->rq_repl_thresh) {
-		MLX5_ASSERT(n >= MLX5_VPMD_RXQ_RPLNSH_THRESH(elts_n));
+	if (n <= rxq->rq_repl_thresh) {
+		MLX5_ASSERT(n + MLX5_VPMD_RX_MAX_BURST >=
+			    MLX5_VPMD_RXQ_RPLNSH_THRESH(elts_n));
 		MLX5_ASSERT(MLX5_VPMD_RXQ_RPLNSH_THRESH(elts_n) >
 			    MLX5_VPMD_DESCS_PER_LOOP);
-		n = RTE_MIN(n, elts_n - elts_idx);
+		/* Not to cross queue end. */
+		n = RTE_MIN(n + MLX5_VPMD_RX_MAX_BURST, elts_n - elts_idx);
 		if (rte_mempool_get_bulk(rxq->mp, (void *)elts, n) < 0) {
 			rxq->stats.rx_nombuf += n;
 			return;
-- 
2.24.1
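
Editor's note: below is a minimal standalone sketch of the replenish
decision the commit message describes: allocate only when the gap
between the allocated index (elts_ci) and the consumed index (rq_pi)
has shrunk to the threshold, top up by one burst, and never cross the
queue end. It is not the driver source; the ring size, burst size,
threshold values, and the replenish_count() helper are made-up
assumptions for illustration, and only the index arithmetic mirrors the
patched mlx5_rx_mprq_replenish_bulk_mbuf() hunk.

/* Illustrative sketch only -- not part of the patch. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define ELTS_N       256u  /* ring size (power of two), assumed value */
#define RX_MAX_BURST  64u  /* one vectorized burst, assumed value */
#define REPL_THRESH   64u  /* replenish when this few mbufs remain in flight */

/* How many new mbufs to allocate for a given allocated/consumed index pair. */
static uint32_t replenish_count(uint32_t elts_ci, uint32_t rq_pi)
{
	const uint32_t wqe_mask = ELTS_N - 1;
	uint32_t n = elts_ci - rq_pi;            /* allocated but not yet consumed */
	uint32_t elts_idx = elts_ci & wqe_mask;  /* slot where new mbufs would land */

	if (n > REPL_THRESH)
		return 0;                        /* enough mbufs in flight, skip */
	/* Top up by one burst, but do not cross the queue end. */
	n += RX_MAX_BURST;
	if (n > ELTS_N - elts_idx)
		n = ELTS_N - elts_idx;
	return n;
}

int main(void)
{
	printf("empty ring:    %" PRIu32 "\n", replenish_count(0, 0));     /* 64 */
	printf("large backlog: %" PRIu32 "\n", replenish_count(200, 100)); /* 0  */
	printf("near ring end: %" PRIu32 "\n", replenish_count(240, 200)); /* 16 */
	return 0;
}

Built with any C compiler, it prints the allocation count for an empty
ring (one full burst), for a large backlog (no allocation), and for an
index pair close to the ring end (capped so the bulk request does not
wrap around the queue).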