DPDK patches and discussions
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, g.singh@nxp.com,
	Alex Marginean <alexandru.marginean@nxp.com>
Subject: [dpdk-dev] [PATCH 09/10] net/enetc: improve prefetch in Rx ring clean
Date: Mon,  2 Mar 2020 20:02:08 +0530	[thread overview]
Message-ID: <20200302143209.11854-10-hemant.agrawal@nxp.com> (raw)
In-Reply-To: <20200302143209.11854-1-hemant.agrawal@nxp.com>

From: Alex Marginean <alexandru.marginean@nxp.com>

LS1028A does not have a platform cache, so any read that follows a hardware
write goes directly to DDR.  The latency of such a read is in excess of 100
core cycles, so try to prefetch further in advance to mitigate this.
How much is worth prefetching really depends on traffic conditions.  With
congested Rx this could go up to 4 cache lines or so.  But if software
keeps up with hardware and follows behind the Rx producer index (PI) by a
cache line or less, then caching more is harmful to performance: we would
only prefetch BDs that have yet to be written by ENETC, and they would be
evicted again anyway.
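
For illustration, the prefetch-ahead pattern can be sketched outside the
driver as below.  This is only a minimal sketch: struct rx_bd,
CACHE_LINE_SIZE, prefetch_line() and prefetch_ahead() are hypothetical
stand-ins for union enetc_rx_bd, RTE_CACHE_LINE_SIZE, rte_prefetch0() and
the driver's Rx clean path, and it assumes 64-byte cache lines with
16-byte BDs, i.e. 4 BDs per cache line, matching the
ENETC_CACHE_LINE_RXBDS computation in the patch.

/*
 * Standalone sketch of the prefetch-ahead pattern; assumes 64-byte cache
 * lines and 16-byte Rx BDs (4 BDs per line).  None of these identifiers
 * are ENETC driver symbols.
 */
#include <stdint.h>

struct rx_bd {			/* 16 bytes, like union enetc_rx_bd */
	uint64_t addr;
	uint32_t lstatus;
	uint32_t flags;
};

#define CACHE_LINE_SIZE	64
#define BDS_PER_LINE	(CACHE_LINE_SIZE / sizeof(struct rx_bd))	/* 4 */

static inline void prefetch_line(const void *p)
{
#ifdef __GNUC__
	__builtin_prefetch(p, 0, 3);	/* read, high temporal locality */
#else
	(void)p;
#endif
}

/*
 * Warm the cache line holding the BD about to be processed plus the two
 * lines that follow it, wrapping around the ring, so the status reads in
 * the clean loop do not stall on DDR.
 */
static inline void prefetch_ahead(const struct rx_bd *ring, unsigned int i,
				  unsigned int bd_count)
{
	prefetch_line(&ring[i]);
	prefetch_line(&ring[(i + BDS_PER_LINE) % bd_count]);
	prefetch_line(&ring[(i + 2 * BDS_PER_LINE) % bd_count]);
}

In the driver itself the same indices are computed with ENETC_RXBD() and
ENETC_CACHE_LINE_RXBDS, and the two look-ahead prefetches are repeated at
the bottom of the clean loop once the index has advanced.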

Signed-off-by: Alex Marginean <alexandru.marginean@nxp.com>
---
 drivers/net/enetc/enetc_rxtx.c | 38 +++++++++++++++++++++++++++++-----
 1 file changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 1acc43a08..e57ecf2d4 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -14,6 +14,8 @@
 #include "enetc.h"
 #include "enetc_logs.h"
 
+#define ENETC_CACHE_LINE_RXBDS	(RTE_CACHE_LINE_SIZE / \
+				 sizeof(union enetc_rx_bd))
 #define ENETC_RXBD_BUNDLE 16 /* Number of buffers to allocate at once */
 
 static int
@@ -321,18 +323,37 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
 		    int work_limit)
 {
 	int rx_frm_cnt = 0;
-	int cleaned_cnt, i;
+	int cleaned_cnt, i, bd_count;
 	struct enetc_swbd *rx_swbd;
+	union enetc_rx_bd *rxbd;
 
-	cleaned_cnt = enetc_bd_unused(rx_ring);
 	/* next descriptor to process */
 	i = rx_ring->next_to_clean;
+	/* prefetch the descriptor we are about to process */
+	rxbd = ENETC_RXBD(*rx_ring, i);
+	rte_prefetch0(rxbd);
+	bd_count = rx_ring->bd_count;
+	/* LS1028A does not have a platform cache so any software access following
+	 * a hardware write will go directly to DDR.  Latency of such a read is
+	 * in excess of 100 core cycles, so try to prefetch more in advance to
+	 * mitigate this.
+	 * How much is worth prefetching really depends on traffic conditions.
+	 * With congested Rx this could go up to 4 cache lines or so.  But if
+	 * software keeps up with hardware and follows behind Rx PI by a cache
+	 * line or less then it's harmful in terms of performance to cache more.
+	 * We would only prefetch BDs that have yet to be written by ENETC,
+	 * which will have to be evicted again anyway.
+	 */
+	rte_prefetch0(ENETC_RXBD(*rx_ring,
+				 (i + ENETC_CACHE_LINE_RXBDS) % bd_count));
+	rte_prefetch0(ENETC_RXBD(*rx_ring,
+				 (i + ENETC_CACHE_LINE_RXBDS * 2) % bd_count));
+
+	cleaned_cnt = enetc_bd_unused(rx_ring);
 	rx_swbd = &rx_ring->q_swbd[i];
 	while (likely(rx_frm_cnt < work_limit)) {
-		union enetc_rx_bd *rxbd;
 		uint32_t bd_status;
 
-		rxbd = ENETC_RXBD(*rx_ring, i);
 		bd_status = rte_le_to_cpu_32(rxbd->r.lstatus);
 		if (!bd_status)
 			break;
@@ -353,11 +374,18 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
 			i = 0;
 			rx_swbd = &rx_ring->q_swbd[i];
 		}
+		rxbd = ENETC_RXBD(*rx_ring, i);
+		rte_prefetch0(ENETC_RXBD(*rx_ring,
+					 (i + ENETC_CACHE_LINE_RXBDS) %
+					  bd_count));
+		rte_prefetch0(ENETC_RXBD(*rx_ring,
+					 (i + ENETC_CACHE_LINE_RXBDS * 2) %
+					 bd_count));
 
-		rx_ring->next_to_clean = i;
 		rx_frm_cnt++;
 	}
 
+	rx_ring->next_to_clean = i;
 	enetc_refill_rx_ring(rx_ring, cleaned_cnt);
 
 	return rx_frm_cnt;
-- 
2.17.1

