Subject: Re: [PATCH v2] net/ixgbe: add proper memory barriers for some Rx functions
To: Morten Brørup, "Zhang, Qi Z", "Yang, Qiming", "Wu, Wenjun1"
Cc: dev@dpdk.org, maobibo@loongson.cn
From: zhoumin
Date: Thu, 4 May 2023 20:58:19 +0800
Message-ID: <13eea5e4-b059-32ae-c757-9be6e6169dfe@loongson.cn>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D878C7@smartserver.smartshare.dk>

Hi Morten,

Thanks for your comments.

On Fri, Apr 28, 2023 at 2:27 PM, Morten Brørup wrote:
>> From: Zhang, Qi Z [mailto:qi.z.zhang@intel.com]
>> Sent: Friday, 28 April 2023 05.44
>>
>>> From: Min Zhou
>>> Sent: Monday, April 24, 2023 5:06 PM
>>>
>>> A segmentation fault has been observed while running the
>>> ixgbe_recv_pkts_lro() function to receive packets on the Loongson 3C5000
>>> processor, which has 64 cores and 4 NUMA nodes.
>>>
>>> From the ixgbe_recv_pkts_lro() function, we found that as long as the first
>>> packet has the EOP bit set, and the length of this packet is less than or
>>> equal to rxq->crc_len, the segmentation fault will definitely happen, even
>>> on other platforms such as X86.
>>>
>>> Because when processing the first packet, first_seg->next will be NULL. If
>>> at the same time this packet has the EOP bit set and its length is less
>>> than or equal to rxq->crc_len, the following loop will be executed:
>>>
>>>     for (lp = first_seg; lp->next != rxm; lp = lp->next)
>>>         ;
>>>
>>> We know that first_seg->next will be NULL under this condition, so the
>>> expression lp->next->next will cause the segmentation fault.
>>>
>>> Normally, the length of the first packet with the EOP bit set will be
>>> greater than rxq->crc_len. However, out-of-order execution by the CPU may
>>> make the read ordering of the status and the rest of the descriptor fields
>>> in this function incorrect. The related code is as follows:
>>>
>>>     rxdp = &rx_ring[rx_id];
>>> #1  staterr = rte_le_to_cpu_32(rxdp->wb.upper.status_error);
>>>
>>>     if (!(staterr & IXGBE_RXDADV_STAT_DD))
>>>         break;
>>>
>>> #2  rxd = *rxdp;
>>>
>>> Statement #2 may be executed before statement #1. This reordering is likely
>>> to make the ready packet appear to have zero length. If that packet is the
>>> first packet and has the EOP bit set, the segmentation fault described
>>> above will happen.
>>>
>>> So, we should add rte_rmb() to ensure the read ordering is correct. We also
>>> did the same thing in the ixgbe_recv_pkts() function to make the rxd data
>>> valid, even though we did not observe a segmentation fault in that function.
>>>
>>> Signed-off-by: Min Zhou
>>> ---
>>> v2:
>>> - Call rte_rmb() on all platforms
>>> ---
>>>  drivers/net/ixgbe/ixgbe_rxtx.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
>>> index c9d6ca9efe..302a5ab7ff 100644
>>> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
>>> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
>>> @@ -1823,6 +1823,8 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf
>>> **rx_pkts,
>>>  		staterr = rxdp->wb.upper.status_error;
>>>  		if (!(staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
>>>  			break;
>>> +
>>> +		rte_rmb();
>>
>> So "volatile" does not prevent re-ordering with the Loongson compiler?
>
> "Volatile" does not prevent re-ordering on any compiler. "Volatile" only prevents caching of the variable marked volatile.
>
> https://wiki.sei.cmu.edu/confluence/display/c/CON02-C.+Do+not+use+volatile+as+a+synchronization+primitive
>
> Thinking out loud: I don't know the performance cost of rte_rmb(); perhaps using atomic accesses with the optimal memory ordering would be a better solution in the long term.

Yes, rte_rmb() probably has a performance cost. I will use a better solution
to solve the problem in the next version of the patch (a rough sketch of the
idea is at the end of this mail).

>>
>>> 		rxd = *rxdp;
>>>
>>> 		/*
>>> @@ -2122,6 +2124,7 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct
>>> rte_mbuf **rx_pkts, uint16_t nb_pkts,
>>> 		if (!(staterr & IXGBE_RXDADV_STAT_DD))
>>> 			break;
>>>
>>> +		rte_rmb();
>>> 		rxd = *rxdp;
>>>
>>> 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
>>> --
>>> 2.31.1

Best regards,
Min
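
P.S. To make that idea a bit more concrete, below is a minimal standalone
sketch of the acquire-load approach Morten mentioned, written with C11 atomics
against a mock descriptor rather than the real ixgbe structures. The names
mock_rx_desc, MOCK_STAT_DD and poll_one_desc are invented for illustration
only; the actual patch may well end up using DPDK's own atomic or barrier
helpers instead.

/*
 * Standalone sketch, NOT the real ixgbe code: a mock Rx descriptor and a
 * DD-bit poll that uses a C11 acquire load instead of a full read barrier.
 * All names here are made up for illustration.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define MOCK_STAT_DD 0x01u /* stand-in for IXGBE_RXDADV_STAT_DD */

struct mock_rx_desc {
	uint16_t pkt_len;               /* filled in before DD is set */
	_Atomic uint32_t status_error;  /* DD bit published last */
};

/* Return the packet length if the descriptor is done, otherwise -1. */
static int
poll_one_desc(struct mock_rx_desc *rxdp)
{
	/*
	 * Acquire load: the later reads of the descriptor cannot be
	 * reordered before this load, so a set DD bit guarantees that
	 * pkt_len holds the value written before DD was published.
	 */
	uint32_t staterr = atomic_load_explicit(&rxdp->status_error,
						memory_order_acquire);

	if (!(staterr & MOCK_STAT_DD))
		return -1;

	return rxdp->pkt_len; /* safe to read the rest of the descriptor */
}

int
main(void)
{
	struct mock_rx_desc desc = {0};

	printf("before DD: %d\n", poll_one_desc(&desc));  /* prints -1 */

	/*
	 * Emulate the producer side (the NIC's DMA write in the real
	 * driver): fill the descriptor, then publish DD with a release
	 * store. Done in the same thread here, purely for illustration.
	 */
	desc.pkt_len = 128;
	atomic_store_explicit(&desc.status_error, MOCK_STAT_DD,
			      memory_order_release);

	printf("after DD:  %d\n", poll_one_desc(&desc));  /* prints 128 */
	return 0;
}

The intent is the same as the rte_rmb() in this patch: the reads of the rest
of the descriptor must not be satisfied before the DD-bit check, but the
ordering is scoped to that one load instead of being a full read barrier.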