From: Rasesh Mody
Date: Mon, 13 Jan 2020 16:39:20 -0800
Message-ID: <20200114003920.17705-3-rmody@marvell.com>
In-Reply-To: <20200114003920.17705-1-rmody@marvell.com>
References: <20200114003920.17705-1-rmody@marvell.com>
Subject: [dpdk-stable] [PATCH 3/3] net/bnx2x: fix to sync fastpath Rx queue access
List-Id: patches for DPDK stable branches

The PMD handles fast path completions in the Rx handler and control path
completions in the interrupt handler. Both process completions from the
same fastpath completion queue, so there is a potential race condition
when the two paths process completions from the same queue and try to
update the Rx producer concurrently. Add a fastpath Rx lock between these
two paths to close this race.
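
For context, the two-path locking scheme described above can be shown as a
minimal sketch using DPDK's rte_spinlock API. The sketch is not part of the
patch; the structure and function names (example_fastpath, example_recv_pkts,
example_intr_handler) are hypothetical stand-ins for the bnx2x fastpath and
its Rx/interrupt handlers.

    #include <stdint.h>

    #include <rte_spinlock.h>

    /* Hypothetical stand-in for the per-queue fastpath state. */
    struct example_fastpath {
    	rte_spinlock_t rx_mtx;  /* guards the shared Rx completion queue */
    	/* ... completion queue indices, producer state ... */
    };

    static void
    example_fastpath_init(struct example_fastpath *fp)
    {
    	rte_spinlock_init(&fp->rx_mtx);
    }

    /* Fast path: Rx burst handler draining completions. */
    static uint16_t
    example_recv_pkts(struct example_fastpath *fp)
    {
    	uint16_t nb_rx = 0;

    	rte_spinlock_lock(&fp->rx_mtx);
    	/* ... process completions and update the Rx producer ... */
    	rte_spinlock_unlock(&fp->rx_mtx);

    	return nb_rx;
    }

    /* Control path: interrupt handler processing the same queue. */
    static void
    example_intr_handler(struct example_fastpath *fp)
    {
    	rte_spinlock_lock(&fp->rx_mtx);
    	/* ... process slow-path completions and update the Rx producer ... */
    	rte_spinlock_unlock(&fp->rx_mtx);
    }

Both handlers take the same per-queue lock before touching the completion
queue, so producer updates from the two paths can never interleave; this is
the pattern the patch applies to bnx2x_rxeof() and bnx2x_recv_pkts() below.
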
Fixes: 540a211084a7 ("bnx2x: driver core")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody
---
 drivers/net/bnx2x/bnx2x.c      | 12 ++++++++++++
 drivers/net/bnx2x/bnx2x.h      |  3 +++
 drivers/net/bnx2x/bnx2x_rxtx.c |  8 +++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index d38da4f60..7ea98b936 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -1167,6 +1167,10 @@ static int bnx2x_has_rx_work(struct bnx2x_fastpath *fp)
 	if (unlikely((rx_cq_cons_sb & MAX_RCQ_ENTRIES(rxq)) ==
 		     MAX_RCQ_ENTRIES(rxq)))
 		rx_cq_cons_sb++;
+
+	PMD_RX_LOG(DEBUG, "hw CQ cons = %d, sw CQ cons = %d",
+		   rx_cq_cons_sb, rxq->rx_cq_head);
+
 	return rxq->rx_cq_head != rx_cq_cons_sb;
 }
 
@@ -1249,9 +1253,12 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 	uint16_t bd_cons, bd_prod, bd_prod_fw, comp_ring_cons;
 	uint16_t hw_cq_cons, sw_cq_cons, sw_cq_prod;
 
+	rte_spinlock_lock(&(fp)->rx_mtx);
+
 	rxq = sc->rx_queues[fp->index];
 	if (!rxq) {
 		PMD_RX_LOG(ERR, "RX queue %d is NULL", fp->index);
+		rte_spinlock_unlock(&(fp)->rx_mtx);
 		return 0;
 	}
 
@@ -1326,9 +1333,14 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp)
 	rxq->rx_cq_head = sw_cq_cons;
 	rxq->rx_cq_tail = sw_cq_prod;
 
+	PMD_RX_LOG(DEBUG, "BD prod = %d, sw CQ prod = %d",
+		   bd_prod_fw, sw_cq_prod);
+
 	/* Update producers */
 	bnx2x_update_rx_prod(sc, fp, bd_prod_fw, sw_cq_prod);
 
+	rte_spinlock_unlock(&(fp)->rx_mtx);
+
 	return sw_cq_cons != hw_cq_cons;
 }
 
diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h
index 3383c7675..1dbc98197 100644
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -360,6 +360,9 @@ struct bnx2x_fastpath {
 	/* pointer back to parent structure */
 	struct bnx2x_softc *sc;
 
+	/* Used to synchronize fastpath Rx access */
+	rte_spinlock_t rx_mtx;
+
 	/* status block */
 	struct bnx2x_dma sb_dma;
 	union bnx2x_host_hc_status_block status_block;
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index b52f023ea..c8bb202d6 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -357,6 +357,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t len, pad;
 	struct rte_mbuf *rx_mb = NULL;
 
+	rte_spinlock_lock(&(fp)->rx_mtx);
+
 	/* Add memory barrier as status block fields can change. This memory
 	 * barrier will flush out all the read/write operations to status block
 	 * generated before the barrier. It will ensure stale data is not read
@@ -379,8 +381,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	 */
 	rmb();
 
-	if (sw_cq_cons == hw_cq_cons)
+	if (sw_cq_cons == hw_cq_cons) {
+		rte_spinlock_unlock(&(fp)->rx_mtx);
 		return 0;
+	}
 
 	while (nb_rx < nb_pkts && sw_cq_cons != hw_cq_cons) {
 
@@ -461,6 +465,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	bnx2x_upd_rx_prod_fast(sc, fp, bd_prod, sw_cq_prod);
 
+	rte_spinlock_unlock(&(fp)->rx_mtx);
+
 	return nb_rx;
 }
-- 
2.18.0