From: Jiawen Wu <jiawenwu@trustnetic.com>
To: dev@dpdk.org
Cc: Jiawen Wu <jiawenwu@trustnetic.com>, stable@dpdk.org
Subject: [PATCH 1/2] net/txgbe: add proper memory barriers in Rx
Date: Mon, 30 Oct 2023 18:51:43 +0800
Message-Id: <20231030105144.595502-1-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers
in Rx"), which fixed the same issue in ixgbe.

A segmentation fault has been observed while running the
txgbe_recv_pkts_lro() function to receive packets on the Loongson
3A5000 processor. It is caused by out-of-order execution of the CPU,
so add a proper memory barrier to ensure that the descriptor reads are
correctly ordered.

Do the same in the txgbe_recv_pkts() function to make sure the rxd
data is valid, even though no segmentation fault has been observed in
that function.

Fixes: 0e484278c85f ("net/txgbe: support Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
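Note for reviewers: below is a minimal standalone sketch of the
load-acquire pattern this patch applies. The struct rx_desc layout,
the RXD_STAT_DD value and the read_done_descriptor() helper are
simplified stand-ins, not the real txgbe definitions; plain C11
atomic_thread_fence(memory_order_acquire) is used so the snippet
compiles on its own, and is equivalent to the
rte_atomic_thread_fence(__ATOMIC_ACQUIRE) call used in the diff.

#include <stdatomic.h>
#include <stdint.h>

#define RXD_STAT_DD 0x1u               /* "descriptor done" bit (simplified) */

struct rx_desc {                       /* hypothetical simplified Rx descriptor */
	uint64_t pkt_addr;
	uint32_t length;
	uint32_t status;
};

/* Copy a completed descriptor out of the ring; return 0 if not ready. */
static int
read_done_descriptor(volatile struct rx_desc *rxdp, struct rx_desc *rxd)
{
	uint32_t staterr = rxdp->status;

	if (!(staterr & RXD_STAT_DD))
		return 0;              /* NIC has not written this entry yet */

	/*
	 * Acquire fence: the DD-bit load above must complete before the
	 * loads of the other descriptor words below. "volatile" only
	 * constrains the compiler; a weakly ordered CPU such as the
	 * Loongson 3A5000 may still reorder the loads in hardware.
	 */
	atomic_thread_fence(memory_order_acquire);

	*rxd = *rxdp;                  /* now safe to read the whole entry */
	return 1;
}

On strongly ordered x86 this acquire fence should compile down to a
pure compiler barrier, so the fix is expected to cost nothing there
while making weakly ordered CPUs correct.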

 drivers/net/txgbe/txgbe_rxtx.c | 47 +++++++++++++++-------------------
 1 file changed, 21 insertions(+), 26 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 834ada886a..24fc34d3c4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1476,11 +1476,22 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * of accesses cannot be reordered by the compiler. If they were
 		 * not volatile, they could be reordered which could lead to
 		 * using invalid descriptor fields when read from rxd.
+		 *
+		 * Meanwhile, to prevent the CPU from executing out of order, we
+		 * need to use a proper memory barrier to ensure the memory
+		 * ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rxdp->qw1.lo.status;
 		if (!(staterr & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD)))
 			break;
+
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		/*
@@ -1726,32 +1737,10 @@ txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 next_desc:
 		/*
-		 * The code in this whole file uses the volatile pointer to
-		 * ensure the read ordering of the status and the rest of the
-		 * descriptor fields (on the compiler level only!!!). This is so
-		 * UGLY - why not to just use the compiler barrier instead? DPDK
-		 * even has the rte_compiler_barrier() for that.
-		 *
-		 * But most importantly this is just wrong because this doesn't
-		 * ensure memory ordering in a general case at all. For
-		 * instance, DPDK is supposed to work on Power CPUs where
-		 * compiler barrier may just not be enough!
-		 *
-		 * I tried to write only this function properly to have a
-		 * starting point (as a part of an LRO/RSC series) but the
-		 * compiler cursed at me when I tried to cast away the
-		 * "volatile" from rx_ring (yes, it's volatile too!!!). So, I'm
-		 * keeping it the way it is for now.
-		 *
-		 * The code in this file is broken in so many other places and
-		 * will just not work on a big endian CPU anyway therefore the
-		 * lines below will have to be revisited together with the rest
-		 * of the txgbe PMD.
-		 *
-		 * TODO:
-		 *    - Get rid of "volatile" and let the compiler do its job.
-		 *    - Use the proper memory barrier (rte_rmb()) to ensure the
-		 *      memory ordering below.
+		 * "Volatile" only prevents caching of the variable marked
+		 * volatile. Most important, "volatile" cannot prevent the CPU
+		 * from executing out of order. So, it is necessary to use a
+		 * proper memory barrier to ensure the memory ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
 
@@ -1759,6 +1748,12 @@ txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 		if (!(staterr & TXGBE_RXD_STAT_DD))
 			break;
 
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
-- 
2.27.0