From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2] net/ixgbe: add proper memory barriers for some Rx functions
To: Ruifeng Wang, Konstantin Ananyev
Cc: dev@dpdk.org, maobibo@loongson.cn, qiming.yang@intel.com, wenjun1.wu@intel.com, drc@linux.vnet.ibm.com, nd
References: <20230424090532.367194-1-zhoumin@loongson.cn> <20671e5e-8e86-4bc3-2d95-4cec014b0539@yandex.ru>
From: zhoumin
Message-ID: <013a551f-7cd9-2849-c7f3-867375d7b463@loongson.cn>
Date: Fri, 5 May 2023 09:45:57 +0800
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: DPDK patches and discussions

Hi Ruifeng,

Thanks for your review.

On Thu, May 4, 2023 at 2:13 PM, Ruifeng Wang wrote:
>> -----Original Message-----
>> From: Konstantin Ananyev
>> Sent: Monday, May 1, 2023 9:29 PM
>> To: zhoumin@loongson.cn
>> Cc: dev@dpdk.org; maobibo@loongson.cn; qiming.yang@intel.com; wenjun1.wu@intel.com;
>> Ruifeng Wang; drc@linux.vnet.ibm.com
>> Subject: Re: [PATCH v2] net/ixgbe: add proper memory barriers for some Rx functions
>>
>>> Segmentation fault has been observed while running the
>>> ixgbe_recv_pkts_lro() function to receive packets on the Loongson
>>> 3C5000 processor, which has 64 cores and 4 NUMA nodes.
>>>
>>> From the ixgbe_recv_pkts_lro() function, we found that as long as the
>>> first packet has the EOP bit set and the length of this packet is
>>> less than or equal to rxq->crc_len, the segmentation fault will
>>> definitely happen, even on other platforms such as x86.
>>>
>>> Because when processing the first packet, first_seg->next will be
>>> NULL. If at the same time this packet has the EOP bit set and its
>>> length is less than or equal to rxq->crc_len, the following loop will be executed:
>>>
>>> for (lp = first_seg; lp->next != rxm; lp = lp->next)
>>> 	;
>>>
>>> We know that first_seg->next will be NULL under this condition, so
>>> the expression lp->next->next will cause the segmentation fault.
>>>
>>> Normally, the length of the first packet with the EOP bit set will be
>>> greater than rxq->crc_len. However, out-of-order execution by the CPU
>>> may make the reads of the status field and of the rest of the
>>> descriptor fields in this function incorrectly ordered. The related code is as follows:
>>>
>>> rxdp = &rx_ring[rx_id];
>>> #1 staterr = rte_le_to_cpu_32(rxdp->wb.upper.status_error);
>>>
>>> if (!(staterr & IXGBE_RXDADV_STAT_DD))
>>> 	break;
>>>
>>> #2 rxd = *rxdp;
>>>
>>> Statement #2 may be executed before statement #1. This reordering is
>>> likely to make the ready packet appear to have zero length. If that packet is the
>>> first packet and has the EOP bit set, the segmentation fault described above will happen.
>>>
>>> So, we should add rte_rmb() to ensure the read ordering is correct. We
>>> also did the same thing in the ixgbe_recv_pkts() function to make the
>>> rxd data valid, even though we did not observe a segmentation fault in that function.
>>>
>>> Signed-off-by: Min Zhou

> "Fixes" tag for backport.

OK, I will add the "Fixes" tag in the V3 patch.
>
>>> ---
>>> v2:
>>> - Make the calling of rte_rmb() unconditional for all platforms
>>> ---
>>> drivers/net/ixgbe/ixgbe_rxtx.c | 3 +++
>>> 1 file changed, 3 insertions(+)
>>>
>>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
>>> index c9d6ca9efe..302a5ab7ff 100644
>>> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
>>> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
>>> @@ -1823,6 +1823,8 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>>> 		staterr = rxdp->wb.upper.status_error;
>>> 		if (!(staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
>>> 			break;
>>> +
>>> +		rte_rmb();
>>> 		rxd = *rxdp;
>>
>>
>> Indeed, looks like a problem to me on systems with relaxed MO.
>> Strange that it was never hit on arm or ppc - cc-ing ARM/PPC maintainers.

> Thanks, Konstantin.

>> About a fix - looks right, but a bit excessive to me - as I understand all we need here is
>> to prevent re-ordering by the CPU itself.
>> So rte_smp_rmb() seems enough here.

> Agree that rte_rmb() is excessive.
> rte_smp_rmb() or rte_atomic_thread_fence(__ATOMIC_ACQUIRE) is enough.

Thanks for your advice. I will compare rte_smp_rmb(), __atomic_load_n() and
rte_atomic_thread_fence() to choose the better one.

> And it is better to add a comment to justify the barrier.

OK, I will add a comment for this change.

>> Or it might be just:
>> staterr = __atomic_load_n(&rxdp->wb.upper.status_error, __ATOMIC_ACQUIRE);
>>
>>
>>> 		/*
>>> @@ -2122,6 +2124,7 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,

> With the proper barrier in place, I think the long comments at the beginning of this loop can be removed.

Yes, I think the long comments can be simplified once the proper barrier is in place.

>>> 		if (!(staterr & IXGBE_RXDADV_STAT_DD))
>>> 			break;
>>>
>>> +		rte_rmb();
>>> 		rxd = *rxdp;
>>>
>>> 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
>>> --
>>> 2.31.1

Best regards,
Min