From: Ruifeng Wang <ruifeng.wang@arm.com>
To: rasland@nvidia.com, matan@nvidia.com, shahafs@nvidia.com,
viacheslavo@nvidia.com
Cc: dev@dpdk.org, jerinj@marvell.com, nd@arm.com,
honnappa.nagarahalli@arm.com, Ruifeng Wang <ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH 2/2] net/mlx5: reduce unnecessary memory access
Date: Tue, 1 Jun 2021 08:30:55 +0000
Message-ID: <20210601083055.97261-3-ruifeng.wang@arm.com>
In-Reply-To: <20210601083055.97261-1-ruifeng.wang@arm.com>
The MR btree length is constant during Rx replenish.
Moved retrieval of the value out of the loop to reduce data loads.
A slight performance uplift was measured on N1SDP.
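As an illustration (not part of the patch itself), the change is a classic
loop-invariant hoist: a value that cannot change inside the loop is read
once before the loop instead of on every iteration. A minimal standalone
sketch of the pattern follows; "cache", "btree_len" and "replenish" are
hypothetical names for this example only, not the mlx5 code:

/*
 * Minimal sketch of hoisting a loop-invariant load, assuming
 * simplified types. All names here are illustrative.
 */
#include <stdint.h>
#include <stddef.h>

struct cache {
	uint16_t len;	/* stand-in for the MR btree length */
};

static uint16_t
btree_len(const struct cache *c)
{
	return c->len;	/* stand-in for mlx5_mr_btree_len() */
}

static void
replenish(const struct cache *c, uint32_t *lkey,
	  const uint32_t *mr_lkey, size_t n)
{
	/*
	 * Read the loop-invariant length once, before the loop,
	 * instead of re-loading it on every iteration.
	 */
	uint16_t len = btree_len(c);
	size_t i;

	for (i = 0; i < n; i++) {
		if (len > 1)	/* multiple MRs: replace the LKey */
			lkey[i] = mr_lkey[i];
	}
}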
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
drivers/net/mlx5/mlx5_rxtx_vec.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index d5af2d91ff..fc7e2a7f41 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -95,6 +95,7 @@ mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
 	volatile struct mlx5_wqe_data_seg *wq =
 		&((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[elts_idx];
 	unsigned int i;
+	uint16_t btree_len;
 
 	if (n >= rxq->rq_repl_thresh) {
 		MLX5_ASSERT(n >= MLX5_VPMD_RXQ_RPLNSH_THRESH(q_n));
@@ -106,6 +107,8 @@ mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
 			rxq->stats.rx_nombuf += n;
 			return;
 		}
+
+		btree_len = mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh);
 		for (i = 0; i < n; ++i) {
 			void *buf_addr;
 
@@ -119,8 +122,7 @@ mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
 			wq[i].addr = rte_cpu_to_be_64((uintptr_t)buf_addr +
 						      RTE_PKTMBUF_HEADROOM);
 			/* If there's a single MR, no need to replace LKey. */
-			if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh)
-				     > 1))
+			if (unlikely(btree_len > 1))
 				wq[i].lkey = mlx5_rx_mb2mr(rxq, elts[i]);
 		}
 		rxq->rq_ci += n;
--
2.25.1
Thread overview: 16+ messages
2021-06-01 8:30 [dpdk-dev] [PATCH 0/2] MLX5 PMD tuning Ruifeng Wang
2021-06-01 8:30 ` [dpdk-dev] [PATCH 1/2] net/mlx5: remove redundant operations Ruifeng Wang
2021-07-02 8:12 ` Slava Ovsiienko
2021-07-02 10:30 ` Ruifeng Wang
2021-07-05 10:01 ` Slava Ovsiienko
2021-07-07 8:00 ` Ruifeng Wang
2021-06-01 8:30 ` Ruifeng Wang [this message]
2021-07-02 7:05 ` [dpdk-dev] [PATCH 2/2] net/mlx5: reduce unnecessary memory access Slava Ovsiienko
2021-07-02 7:28 ` Ruifeng Wang
2021-06-30 7:22 ` [dpdk-dev] [PATCH 0/2] MLX5 PMD tuning Ruifeng Wang
2021-07-07 9:03 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
2021-07-07 9:03 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: remove redundant operations Ruifeng Wang
2021-07-12 15:31 ` Slava Ovsiienko
2021-07-07 9:03 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: reduce unnecessary memory access Ruifeng Wang
2021-07-12 15:33 ` Slava Ovsiienko
2021-07-13 9:32 ` [dpdk-dev] [PATCH v2 0/2] MLX5 PMD tuning Raslan Darawsheh