DPDK patches and discussions
* [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path
@ 2020-01-14  0:39 Rasesh Mody
  2020-01-14  0:39 ` [dpdk-dev] [PATCH 2/3] net/bnx2x: fix reset of scan FP flag Rasesh Mody
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Rasesh Mody @ 2020-01-14  0:39 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev, stable

When handling the Rx completion queue, the PMD does not use the
required read/write barriers before reading completion queue element
(CQE) indices and before updating/writing the hardware consumer and
producer. This patch adds the appropriate read/write memory barriers
at the places where the driver and adapter read or update these
indices.

Fixes: 540a211084a7 ("bnx2x: driver core")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c      |  5 +++++
 drivers/net/bnx2x/bnx2x_rxtx.c | 22 ++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index ed31335ac..9c5e7995d 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -1255,6 +1255,11 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 		return 0;
 	}
 
+	/* Add memory barrier as status block fields can change. This memory
+	 * barrier will flush out all the read/write operations to status block
+	 * generated before the barrier. It will ensure stale data is not read
+	 */
+	mb();
 	/* CQ "next element" is of the size of the regular element */
 	hw_cq_cons = le16toh(*fp->rx_cq_cons_sb);
 	if (unlikely((hw_cq_cons & USABLE_RCQ_ENTRIES_PER_PAGE) ==
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index ae97dfee3..b52f023ea 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -324,11 +324,22 @@ bnx2x_upd_rx_prod_fast(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp,
 	struct ustorm_eth_rx_producers rx_prods = { 0 };
 	uint32_t *val = NULL;
 
+	/* Update producers */
 	rx_prods.bd_prod  = rx_bd_prod;
 	rx_prods.cqe_prod = rx_cq_prod;
 
+	/*
+	 * Make sure that the BD and SGE data is updated before updating the
+	 * producers since FW might read the BD/SGE right after the producer
+	 * is updated.
+	 * The following barrier is also mandatory since FW assumes BDs
+	 * must have buffers.
+	 */
+	wmb();
 	val = (uint32_t *)&rx_prods;
 	REG_WR(sc, fp->ustorm_rx_prods_offset, val[0]);
+
+	wmb();			/* keep prod updates ordered */
 }
 
 static uint16_t
@@ -346,6 +357,11 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t len, pad;
 	struct rte_mbuf *rx_mb = NULL;
 
+	/* Add memory barrier as status block fields can change. This memory
+	 * barrier will flush out all the read/write operations to status block
+	 * generated before the barrier. It will ensure stale data is not read
+	 */
+	mb();
 	hw_cq_cons = le16toh(*fp->rx_cq_cons_sb);
 	if ((hw_cq_cons & USABLE_RCQ_ENTRIES_PER_PAGE) ==
 			USABLE_RCQ_ENTRIES_PER_PAGE) {
@@ -357,6 +373,12 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	sw_cq_cons = rxq->rx_cq_head;
 	sw_cq_prod = rxq->rx_cq_tail;
 
+	/*
+	 * Memory barrier necessary as speculative reads of the Rx
+	 * buffer can be ahead of the index in the status block
+	 */
+	rmb();
+
 	if (sw_cq_cons == hw_cq_cons)
 		return 0;
 
-- 
2.18.0

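Condensed, the ordering discipline this patch establishes around the Rx
indices looks like the following self-contained fragment. It is an
illustrative sketch, not the driver's code: the two volatile variables
stand in for the NIC-written status block entry and the producer
doorbell, and the portable rte_* barrier API replaces the driver's
local mb()/rmb()/wmb() macros.

	#include <stdint.h>
	#include <rte_atomic.h>

	static volatile uint16_t hw_cq_cons_sb; /* stand-in for *fp->rx_cq_cons_sb */
	static volatile uint32_t rx_prod_reg;   /* stand-in for the producer doorbell */

	static uint16_t
	rx_poll_sketch(uint16_t sw_cq_cons, uint32_t new_prod)
	{
		uint16_t hw;

		rte_mb();           /* flush status block accesses; avoid a stale index */
		hw = hw_cq_cons_sb; /* hardware consumer index DMA'd by the NIC */

		rte_rmb();          /* CQE/buffer reads must not pass the index read */
		if (sw_cq_cons == hw)
			return 0;   /* nothing new to process */

		/* ... process CQEs and refill BDs/SGEs here ... */

		rte_wmb();              /* publish BD/SGE writes before the producer */
		rx_prod_reg = new_prod; /* FW may read the BDs once this lands */

		return hw;
	}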


* [dpdk-dev] [PATCH 2/3] net/bnx2x: fix reset of scan FP flag
  2020-01-14  0:39 [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Rasesh Mody
@ 2020-01-14  0:39 ` Rasesh Mody
  2020-01-14  0:39 ` [dpdk-dev] [PATCH 3/3] net/bnx2x: fix to sync fastpath Rx queue access Rasesh Mody
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Rasesh Mody @ 2020-01-14  0:39 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev, stable

The fastpath task queue handler resets the fastpath scan flag
unconditionally; this patch changes it to reset the flag only if it
was set.

Fixes: 08a6e472c3d7 ("net/bnx2x: fix packet drop")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 9c5e7995d..d38da4f60 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -4582,10 +4582,10 @@ static void bnx2x_handle_fp_tq(struct bnx2x_fastpath *fp)
 			bnx2x_handle_fp_tq(fp);
 			return;
 		}
+		/* We have completed slow path completion, clear the flag */
+		rte_atomic32_set(&sc->scan_fp, 0);
 	}
 
-	/* Assuming we have completed slow path completion, clear the flag */
-	rte_atomic32_set(&sc->scan_fp, 0);
 	bnx2x_ack_sb(sc, fp->igu_sb_id, USTORM_ID,
 		   le16toh(fp->fp_hc_idx), IGU_INT_ENABLE, 1);
 }
-- 
2.18.0


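The hazard with the unconditional clear is a lost request: the slow path
can set scan_fp after the handler's check but before the clear, and the
clear then silently discards it. A simplified view of the fixed control
flow (names taken from the diff; the re-check and task-queue re-dispatch
are elided):

	if (rte_atomic32_read(&sc->scan_fp)) {
		if (bnx2x_has_rx_work(fp))
			bnx2x_rxeof(sc, fp);
		/* clear only on the path that observed and serviced the flag */
		rte_atomic32_set(&sc->scan_fp, 0);
	}
	bnx2x_ack_sb(sc, fp->igu_sb_id, USTORM_ID,
		     le16toh(fp->fp_hc_idx), IGU_INT_ENABLE, 1);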

* [dpdk-dev] [PATCH 3/3] net/bnx2x: fix to sync fastpath Rx queue access
  2020-01-14  0:39 [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Rasesh Mody
  2020-01-14  0:39 ` [dpdk-dev] [PATCH 2/3] net/bnx2x: fix reset of scan FP flag Rasesh Mody
@ 2020-01-14  0:39 ` Rasesh Mody
  2020-01-14 13:49 ` [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Jerin Jacob
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Rasesh Mody @ 2020-01-14  0:39 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev, stable

The PMD handles fast path completions in the Rx handler and control
path completions in the interrupt handler. Both process completions
from the same fastpath completion queue, so there is a potential race
condition when the two paths process completions from the same queue
and try to update the Rx producer concurrently.

Add a fastpath Rx lock between these two paths to close this race.

Fixes: 540a211084a7 ("bnx2x: driver core")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c      | 12 ++++++++++++
 drivers/net/bnx2x/bnx2x.h      |  3 +++
 drivers/net/bnx2x/bnx2x_rxtx.c |  8 +++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index d38da4f60..7ea98b936 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -1167,6 +1167,10 @@ static int bnx2x_has_rx_work(struct bnx2x_fastpath *fp)
 	if (unlikely((rx_cq_cons_sb & MAX_RCQ_ENTRIES(rxq)) ==
 		     MAX_RCQ_ENTRIES(rxq)))
 		rx_cq_cons_sb++;
+
+	PMD_RX_LOG(DEBUG, "hw CQ cons = %d, sw CQ cons = %d",
+		   rx_cq_cons_sb, rxq->rx_cq_head);
+
 	return rxq->rx_cq_head != rx_cq_cons_sb;
 }
 
@@ -1249,9 +1253,12 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 	uint16_t bd_cons, bd_prod, bd_prod_fw, comp_ring_cons;
 	uint16_t hw_cq_cons, sw_cq_cons, sw_cq_prod;
 
+	rte_spinlock_lock(&(fp)->rx_mtx);
+
 	rxq = sc->rx_queues[fp->index];
 	if (!rxq) {
 		PMD_RX_LOG(ERR, "RX queue %d is NULL", fp->index);
+		rte_spinlock_unlock(&(fp)->rx_mtx);
 		return 0;
 	}
 
@@ -1326,9 +1333,14 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 	rxq->rx_cq_head = sw_cq_cons;
 	rxq->rx_cq_tail = sw_cq_prod;
 
+	PMD_RX_LOG(DEBUG, "BD prod = %d, sw CQ prod = %d",
+		   bd_prod_fw, sw_cq_prod);
+
 	/* Update producers */
 	bnx2x_update_rx_prod(sc, fp, bd_prod_fw, sw_cq_prod);
 
+	rte_spinlock_unlock(&(fp)->rx_mtx);
+
 	return sw_cq_cons != hw_cq_cons;
 }
 
diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h
index 3383c7675..1dbc98197 100644
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -360,6 +360,9 @@ struct bnx2x_fastpath {
 	/* pointer back to parent structure */
 	struct bnx2x_softc *sc;
 
+	/* Used to synchronize fastpath Rx access */
+	rte_spinlock_t rx_mtx;
+
 	/* status block */
 	struct bnx2x_dma                 sb_dma;
 	union bnx2x_host_hc_status_block status_block;
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index b52f023ea..c8bb202d6 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -357,6 +357,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t len, pad;
 	struct rte_mbuf *rx_mb = NULL;
 
+	rte_spinlock_lock(&(fp)->rx_mtx);
+
 	/* Add memory barrier as status block fields can change. This memory
 	 * barrier will flush out all the read/write operations to status block
 	 * generated before the barrier. It will ensure stale data is not read
@@ -379,8 +381,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	 */
 	rmb();
 
-	if (sw_cq_cons == hw_cq_cons)
+	if (sw_cq_cons == hw_cq_cons) {
+		rte_spinlock_unlock(&(fp)->rx_mtx);
 		return 0;
+	}
 
 	while (nb_rx < nb_pkts && sw_cq_cons != hw_cq_cons) {
 
@@ -461,6 +465,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	bnx2x_upd_rx_prod_fast(sc, fp, bd_prod, sw_cq_prod);
 
+	rte_spinlock_unlock(&(fp)->rx_mtx);
+
 	return nb_rx;
 }
 
-- 
2.18.0


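Reduced to its shape, the critical section serializes the two contexts
that drain the same queue and write the same producer. A sketch using the
names from the patch; initialization of the new rx_mtx field (e.g. with
rte_spinlock_init() at queue-setup time) is assumed here, as it is not
part of this diff:

	/* Both bnx2x_recv_pkts() (polling) and bnx2x_rxeof() (interrupt
	 * path) walk the same completion queue; the per-queue spinlock
	 * makes each drain atomic with respect to the other path. */
	rte_spinlock_lock(&fp->rx_mtx);
	/* ... read rxq->rx_cq_head / rxq->rx_cq_tail, consume CQEs ... */
	rxq->rx_cq_head = sw_cq_cons;
	rxq->rx_cq_tail = sw_cq_prod;
	bnx2x_update_rx_prod(sc, fp, bd_prod_fw, sw_cq_prod);
	rte_spinlock_unlock(&fp->rx_mtx);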

* Re: [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path
  2020-01-14  0:39 [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Rasesh Mody
  2020-01-14  0:39 ` [dpdk-dev] [PATCH 2/3] net/bnx2x: fix reset of scan FP flag Rasesh Mody
  2020-01-14  0:39 ` [dpdk-dev] [PATCH 3/3] net/bnx2x: fix to sync fastpath Rx queue access Rasesh Mody
@ 2020-01-14 13:49 ` Jerin Jacob
  2020-01-15  1:57   ` [dpdk-dev] [EXT] " Rasesh Mody
  2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 1/2] net/bnx2x: fix reset of scan FP flag Rasesh Mody
  2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 2/2] net/bnx2x: fix to sync fastpath Rx queue access Rasesh Mody
  4 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob @ 2020-01-14 13:49 UTC (permalink / raw)
  To: Rasesh Mody, Gavin Hu
  Cc: dpdk-dev, Jerin Jacob, Ferruh Yigit, GR-Everest-DPDK-Dev, dpdk stable

+ Gavin

On Tue, Jan 14, 2020 at 6:09 AM Rasesh Mody <rmody@marvell.com> wrote:
>
> When handling the Rx completion queue, the PMD does not use the
> required read/write barriers before reading completion queue element
> (CQE) indices and before updating/writing the hardware consumer and
> producer. This patch adds the appropriate read/write memory barriers
> at the places where the driver and adapter read or update these
> indices.
>
> Fixes: 540a211084a7 ("bnx2x: driver core")
> Cc: stable@dpdk.org
>
> Signed-off-by: Rasesh Mody <rmody@marvell.com>
> ---
>  drivers/net/bnx2x/bnx2x.c      |  5 +++++
>  drivers/net/bnx2x/bnx2x_rxtx.c | 22 ++++++++++++++++++++++
>  2 files changed, 27 insertions(+)
>
> diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
> index ed31335ac..9c5e7995d 100644
> --- a/drivers/net/bnx2x/bnx2x.c
> +++ b/drivers/net/bnx2x/bnx2x.c
> @@ -1255,6 +1255,11 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
>                 return 0;
>         }
>
> +       /* Add memory barrier as status block fields can change. This memory
> +        * barrier will flush out all the read/write operations to status block
> +        * generated before the barrier. It will ensure stale data is not read
> +        */
> +       mb();

# Do you need full barriers here?
# On which architecture did you see this issue?
# rte_cio_* barriers are performance friendly. Have you checked whether
rte_cio_* would suffice for the requirements?
See the discussion in http://patches.dpdk.org/patch/64038/

I assume the 2/3 and 3/3 patches are for the slow path. If so, it is fine
to use full barriers in those patches.
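
For reference, the substitution suggested here for the fast path would
look roughly as follows (a hypothetical fragment, not code from the
patch; rte_cio_rmb()/rte_cio_wmb() only order accesses to coherent I/O
memory and are cheaper than full barriers on weakly ordered CPUs such as
arm64):

	hw_cq_cons = le16toh(*fp->rx_cq_cons_sb);
	rte_cio_rmb();  /* order CQE reads after the DMA'd index read */
	/* ... consume CQEs, refill BDs/SGEs ... */
	rte_cio_wmb();  /* publish BD/SGE writes before the producer */
	REG_WR(sc, fp->ustorm_rx_prods_offset, val[0]);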


* Re: [dpdk-dev] [EXT] Re: [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path
  2020-01-14 13:49 ` [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Jerin Jacob
@ 2020-01-15  1:57   ` Rasesh Mody
  2020-01-26 22:55     ` Rasesh Mody
  0 siblings, 1 reply; 9+ messages in thread
From: Rasesh Mody @ 2020-01-15  1:57 UTC (permalink / raw)
  To: Jerin Jacob, Gavin Hu
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Ferruh Yigit,
	GR-Everest-DPDK-Dev, dpdk stable

Hi Jerin,

>From: Jerin Jacob <jerinjacobk@gmail.com>
>Sent: Tuesday, January 14, 2020 5:49 AM
>To: Rasesh Mody <rmody@marvell.com>; Gavin Hu <gavin.hu@arm.com>
>Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob Kollanukkaran
><jerinj@marvell.com>; Ferruh Yigit <ferruh.yigit@intel.com>; GR-Everest-
>DPDK-Dev <GR-Everest-DPDK-Dev@marvell.com>; dpdk stable
><stable@dpdk.org>
>Subject: [EXT] Re: [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required
>mem barriers in Rx path
>
>+ Gavin
>
>On Tue, Jan 14, 2020 at 6:09 AM Rasesh Mody <rmody@marvell.com> wrote:
>>
>> When handling the Rx completion queue, the PMD does not use the
>> required read/write barriers before reading completion queue element
>> (CQE) indices and before updating/writing the hardware consumer and
>> producer. This patch adds the appropriate read/write memory barriers
>> at the places where the driver and adapter read or update these
>> indices.
>>
>> Fixes: 540a211084a7 ("bnx2x: driver core")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Rasesh Mody <rmody@marvell.com>
>> ---
>>  drivers/net/bnx2x/bnx2x.c      |  5 +++++
>>  drivers/net/bnx2x/bnx2x_rxtx.c | 22 ++++++++++++++++++++++
>>  2 files changed, 27 insertions(+)
>>
>> diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
>> index ed31335ac..9c5e7995d 100644
>> --- a/drivers/net/bnx2x/bnx2x.c
>> +++ b/drivers/net/bnx2x/bnx2x.c
>> @@ -1255,6 +1255,11 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc
>*sc, struct bnx2x_fastpath *fp)
>>                 return 0;
>>         }
>>
>> +       /* Add memory barrier as status block fields can change. This memory
>> +        * barrier will flush out all the read/write operations to status block
>> +        * generated before the barrier. It will ensure stale data is not read
>> +        */
>> +       mb();
>
># Do you need full barriers here?

Yes

># On which architecture did you see this issue?
># rte_cio_* barriers are performance friendly. Have you checked whether
>rte_cio_* would suffice for the requirements?
>See the discussion in http://patches.dpdk.org/patch/64038/

This patch is to prevent a potential issue which can happen in the absence of the required barriers.
The above barrier is in the slow path. However, there is another full barrier in the fast path as part of this patch which could use rte_cio_*.
I'll revise the patch and resend.

>
>I assume the 2/3 and 3/3 patches are for the slow path. If so, it is fine to
>use full barriers in those patches.

The 2/3 patch is a slow path fix, and 3/3 fixes a race condition that can occur between the slow path and the fast path.

Thanks!
-Rasesh




* [dpdk-dev] [PATCH v2 1/2] net/bnx2x: fix reset of scan FP flag
  2020-01-14  0:39 [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Rasesh Mody
                   ` (2 preceding siblings ...)
  2020-01-14 13:49 ` [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Jerin Jacob
@ 2020-01-26 22:54 ` Rasesh Mody
  2020-01-28  9:59   ` Jerin Jacob
  2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 2/2] net/bnx2x: fix to sync fastpath Rx queue access Rasesh Mody
  4 siblings, 1 reply; 9+ messages in thread
From: Rasesh Mody @ 2020-01-26 22:54 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev, stable

The fastpath task queue handler resets the fastpath scan flag
unconditionally; this patch changes it to reset the flag only if it
was set.

Fixes: 08a6e472c3d7 ("net/bnx2x: fix packet drop")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index ed31335ac..50e289b28 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -4577,10 +4577,10 @@ static void bnx2x_handle_fp_tq(struct bnx2x_fastpath *fp)
 			bnx2x_handle_fp_tq(fp);
 			return;
 		}
+		/* We have completed slow path completion, clear the flag */
+		rte_atomic32_set(&sc->scan_fp, 0);
 	}
 
-	/* Assuming we have completed slow path completion, clear the flag */
-	rte_atomic32_set(&sc->scan_fp, 0);
 	bnx2x_ack_sb(sc, fp->igu_sb_id, USTORM_ID,
 		   le16toh(fp->fp_hc_idx), IGU_INT_ENABLE, 1);
 }
-- 
2.18.0



* [dpdk-dev] [PATCH v2 2/2] net/bnx2x: fix to sync fastpath Rx queue access
  2020-01-14  0:39 [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Rasesh Mody
                   ` (3 preceding siblings ...)
  2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 1/2] net/bnx2x: fix reset of scan FP flag Rasesh Mody
@ 2020-01-26 22:54 ` Rasesh Mody
  4 siblings, 0 replies; 9+ messages in thread
From: Rasesh Mody @ 2020-01-26 22:54 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev, stable

The PMD handles fast path completions in the Rx handler and control
path completions in the interrupt handler. Both process completions
from the same fastpath completion queue, so there is a potential race
condition when the two paths process completions from the same queue
and try to update the Rx producer concurrently.

Add a fastpath Rx lock between these two paths to close this race.

Fixes: 540a211084a7 ("bnx2x: driver core")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c      | 12 ++++++++++++
 drivers/net/bnx2x/bnx2x.h      |  3 +++
 drivers/net/bnx2x/bnx2x_rxtx.c |  8 +++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 50e289b28..0b4030e2b 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -1167,6 +1167,10 @@ static int bnx2x_has_rx_work(struct bnx2x_fastpath *fp)
 	if (unlikely((rx_cq_cons_sb & MAX_RCQ_ENTRIES(rxq)) ==
 		     MAX_RCQ_ENTRIES(rxq)))
 		rx_cq_cons_sb++;
+
+	PMD_RX_LOG(DEBUG, "hw CQ cons = %d, sw CQ cons = %d",
+		   rx_cq_cons_sb, rxq->rx_cq_head);
+
 	return rxq->rx_cq_head != rx_cq_cons_sb;
 }
 
@@ -1249,9 +1253,12 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 	uint16_t bd_cons, bd_prod, bd_prod_fw, comp_ring_cons;
 	uint16_t hw_cq_cons, sw_cq_cons, sw_cq_prod;
 
+	rte_spinlock_lock(&(fp)->rx_mtx);
+
 	rxq = sc->rx_queues[fp->index];
 	if (!rxq) {
 		PMD_RX_LOG(ERR, "RX queue %d is NULL", fp->index);
+		rte_spinlock_unlock(&(fp)->rx_mtx);
 		return 0;
 	}
 
@@ -1321,9 +1328,14 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 	rxq->rx_cq_head = sw_cq_cons;
 	rxq->rx_cq_tail = sw_cq_prod;
 
+	PMD_RX_LOG(DEBUG, "BD prod = %d, sw CQ prod = %d",
+		   bd_prod_fw, sw_cq_prod);
+
 	/* Update producers */
 	bnx2x_update_rx_prod(sc, fp, bd_prod_fw, sw_cq_prod);
 
+	rte_spinlock_unlock(&(fp)->rx_mtx);
+
 	return sw_cq_cons != hw_cq_cons;
 }
 
diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h
index 3383c7675..1dbc98197 100644
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -360,6 +360,9 @@ struct bnx2x_fastpath {
 	/* pointer back to parent structure */
 	struct bnx2x_softc *sc;
 
+	/* Used to synchronize fastpath Rx access */
+	rte_spinlock_t rx_mtx;
+
 	/* status block */
 	struct bnx2x_dma                 sb_dma;
 	union bnx2x_host_hc_status_block status_block;
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index ae97dfee3..0d7703584 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -346,6 +346,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t len, pad;
 	struct rte_mbuf *rx_mb = NULL;
 
+	rte_spinlock_lock(&(fp)->rx_mtx);
+
 	hw_cq_cons = le16toh(*fp->rx_cq_cons_sb);
 	if ((hw_cq_cons & USABLE_RCQ_ENTRIES_PER_PAGE) ==
 			USABLE_RCQ_ENTRIES_PER_PAGE) {
@@ -357,8 +359,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	sw_cq_cons = rxq->rx_cq_head;
 	sw_cq_prod = rxq->rx_cq_tail;
 
-	if (sw_cq_cons == hw_cq_cons)
+	if (sw_cq_cons == hw_cq_cons) {
+		rte_spinlock_unlock(&(fp)->rx_mtx);
 		return 0;
+	}
 
 	while (nb_rx < nb_pkts && sw_cq_cons != hw_cq_cons) {
 
@@ -439,6 +443,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	bnx2x_upd_rx_prod_fast(sc, fp, bd_prod, sw_cq_prod);
 
+	rte_spinlock_unlock(&(fp)->rx_mtx);
+
 	return nb_rx;
 }
 
-- 
2.18.0



* Re: [dpdk-dev] [EXT] Re: [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path
  2020-01-15  1:57   ` [dpdk-dev] [EXT] " Rasesh Mody
@ 2020-01-26 22:55     ` Rasesh Mody
  0 siblings, 0 replies; 9+ messages in thread
From: Rasesh Mody @ 2020-01-26 22:55 UTC (permalink / raw)
  To: Jerin Jacob, Gavin Hu
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Ferruh Yigit,
	GR-Everest-DPDK-Dev, dpdk stable

Hi Jerin,

>From: Rasesh Mody <rmody@marvell.com>
>Sent: Tuesday, January 14, 2020 5:57 PM
>
>Hi Jerin,
>
>>From: Jerin Jacob <jerinjacobk@gmail.com>
>>Sent: Tuesday, January 14, 2020 5:49 AM
>>To: Rasesh Mody <rmody@marvell.com>; Gavin Hu <gavin.hu@arm.com>
>>Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob Kollanukkaran
>><jerinj@marvell.com>; Ferruh Yigit <ferruh.yigit@intel.com>;
>>GR-Everest- DPDK-Dev <GR-Everest-DPDK-Dev@marvell.com>; dpdk stable
>><stable@dpdk.org>
>>Subject: [EXT] Re: [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use
>>required mem barriers in Rx path
>>
>>+ Gavin
>>
>>On Tue, Jan 14, 2020 at 6:09 AM Rasesh Mody <rmody@marvell.com> wrote:
>>>
>>> When handling the Rx completion queue, the PMD does not use the
>>> required read/write barriers before reading completion queue element
>>> (CQE) indices and before updating/writing the hardware consumer and
>>> producer. This patch adds the appropriate read/write memory barriers
>>> at the places where the driver and adapter read or update these
>>> indices.
>>>
>>> Fixes: 540a211084a7 ("bnx2x: driver core")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Rasesh Mody <rmody@marvell.com>
>>> ---
>>>  drivers/net/bnx2x/bnx2x.c      |  5 +++++
>>>  drivers/net/bnx2x/bnx2x_rxtx.c | 22 ++++++++++++++++++++++
>>>  2 files changed, 27 insertions(+)
>>>
>>> diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
>>> index ed31335ac..9c5e7995d 100644
>>> --- a/drivers/net/bnx2x/bnx2x.c
>>> +++ b/drivers/net/bnx2x/bnx2x.c
>>> @@ -1255,6 +1255,11 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc
>>*sc, struct bnx2x_fastpath *fp)
>>>                 return 0;
>>>         }
>>>
>>> +       /* Add memory barrier as status block fields can change. This memory
>>> +        * barrier will flush out all the read/write operations to status block
>>> +        * generated before the barrier. It will ensure stale data is not read
>>> +        */
>>> +       mb();
>>
>># Do you need full barriers here?
>
>Yes
>
>># On which architecture did you see this issue?
>># rte_cio_* barriers are performance friendly. Have you checked whether
>>rte_cio_* would suffice for the requirements?
>>See the discussion in http://patches.dpdk.org/patch/64038/
>
>This patch is to prevent a potential issue which can happen in the absence
>of the required barriers.
>The above barrier is in the slow path. However, there is another full barrier
>in the fast path as part of this patch which could use rte_cio_*.
>I'll revise the patch and resend.

Note that the memory barrier changes are a preventive fix; we did not observe any performance issue with the original changes.
If we change the read/write fast path barriers to rte_cio_*, the PMD will end up with mixed barriers for the slow path and the fast path.
We'll revisit these changes later. I have sent a v2 series without the memory barrier changes.

Thanks!
-Rasesh
>
>>
>>I assume the 2/3 and 3/3 patches are for the slow path. If so, it is fine
>>to use full barriers in those patches.
>
>The 2/3 patch is a slow path fix, and 3/3 fixes a race condition that can
>occur between the slow path and the fast path.
>
>Thanks!
>-Rasesh
>




* Re: [dpdk-dev] [PATCH v2 1/2] net/bnx2x: fix reset of scan FP flag
  2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 1/2] net/bnx2x: fix reset of scan FP flag Rasesh Mody
@ 2020-01-28  9:59   ` Jerin Jacob
  0 siblings, 0 replies; 9+ messages in thread
From: Jerin Jacob @ 2020-01-28  9:59 UTC (permalink / raw)
  To: Rasesh Mody
  Cc: dpdk-dev, Jerin Jacob, Ferruh Yigit, GR-Everest-DPDK-Dev, dpdk stable

On Mon, Jan 27, 2020 at 4:24 AM Rasesh Mody <rmody@marvell.com> wrote:
>
> The fastpath task queue handler resets the fastpath scan flag
> unconditionally; this patch changes it to reset the flag only if it
> was set.
>
> Fixes: 08a6e472c3d7 ("net/bnx2x: fix packet drop")
> Cc: stable@dpdk.org
>
> Signed-off-by: Rasesh Mody <rmody@marvell.com>

Series applied to dpdk-next-net-mrvl/master. Thanks.


> ---
>  drivers/net/bnx2x/bnx2x.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
> index ed31335ac..50e289b28 100644
> --- a/drivers/net/bnx2x/bnx2x.c
> +++ b/drivers/net/bnx2x/bnx2x.c
> @@ -4577,10 +4577,10 @@ static void bnx2x_handle_fp_tq(struct bnx2x_fastpath *fp)
>                         bnx2x_handle_fp_tq(fp);
>                         return;
>                 }
> +               /* We have completed slow path completion, clear the flag */
> +               rte_atomic32_set(&sc->scan_fp, 0);
>         }
>
> -       /* Assuming we have completed slow path completion, clear the flag */
> -       rte_atomic32_set(&sc->scan_fp, 0);
>         bnx2x_ack_sb(sc, fp->igu_sb_id, USTORM_ID,
>                    le16toh(fp->fp_hc_idx), IGU_INT_ENABLE, 1);
>  }
> --
> 2.18.0
>


end of thread (newest message: 2020-01-28  9:59 UTC)

Thread overview: 9+ messages
2020-01-14  0:39 [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Rasesh Mody
2020-01-14  0:39 ` [dpdk-dev] [PATCH 2/3] net/bnx2x: fix reset of scan FP flag Rasesh Mody
2020-01-14  0:39 ` [dpdk-dev] [PATCH 3/3] net/bnx2x: fix to sync fastpath Rx queue access Rasesh Mody
2020-01-14 13:49 ` [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path Jerin Jacob
2020-01-15  1:57   ` [dpdk-dev] [EXT] " Rasesh Mody
2020-01-26 22:55     ` Rasesh Mody
2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 1/2] net/bnx2x: fix reset of scan FP flag Rasesh Mody
2020-01-28  9:59   ` Jerin Jacob
2020-01-26 22:54 ` [dpdk-dev] [PATCH v2 2/2] net/bnx2x: fix to sync fastpath Rx queue access Rasesh Mody
