From: Michał Krawczyk
Date: Tue, 19 Apr 2022 22:27:32 +0200
Subject: Re: DPDK:20.11.1: net/ena crash while fetching xstats
To: Stephen Hemminger
Cc: Amiya Mohakud, dev, Sachin Kanoje, Megha Punjani, Sharad Saha, Eswar Sadaram, "Brandes, Shai", ena-dev

On Tue, 19 Apr 2022 at 17:01, Stephen Hemminger wrote:
>
> On Tue, 19 Apr 2022 14:10:23 +0200
> Michał Krawczyk wrote:
>
> > On Mon, 18 Apr 2022 at 17:19, Amiya Mohakud wrote:
> > >
> > > + Megha, Sharad and Eswar.
> > >
> > > On Mon, Apr 18, 2022 at 2:03 PM Amiya Mohakud wrote:
> > >>
> > >> Hi Michal/DPDK-Experts,
> > >>
> > >> I am facing an issue in the net/ena driver while fetching extended stats (xstats). DPDK seems to segfault with the backtrace below.
> > >>
> > >> DPDK version: 20.11.1
> > >> ENA version: 2.2.1
> > >>
> > >> Using host libthread_db library "/lib64/libthread_db.so.1".
> > >> Core was generated by `/opt/dpfs/usr/local/bin/brdagent'.
> > >> Program terminated with signal SIGSEGV, Segmentation fault.
> > >>
> > >> #0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:232
> > >> 232         VMOVU %VEC(0), (%rdi)
> > >> [Current thread is 1 (Thread 0x7fffed93a400 (LWP 5060))]
> > >>
> > >> Thread 1 (Thread 0x7fffed93a400 (LWP 5060)):
> > >> #0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:232
> > >> #1  0x00007ffff3c246df in ena_com_handle_admin_completion () from ../lib64/../../lib64/libdpdk.so.20
> > >> #2  0x00007ffff3c1e7f5 in ena_interrupt_handler_rte () from ../lib64/../../lib64/libdpdk.so.20
> > >> #3  0x00007ffff3519902 in eal_intr_thread_main () from /../lib64/../../lib64/libdpdk.so.20
> > >> #4  0x00007ffff510714a in start_thread (arg=) at pthread_create.c:479
> > >> #5  0x00007ffff561ff23 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> > >>
> > >> Background:
> > >>
> > >> This used to work fine with DPDK 19.11.3 (no crash was observed with that version), but after upgrading to DPDK 20.11.1, DPDK crashes with the above trace.
> > >> It looks to me like a DPDK issue.
> > >> I can see multiple fixes/patches in the net/ena area, but I am not able to identify which patch would fix this exact issue.
> > >>
> > >> For example: http://git.dpdk.org/dpdk/diff/?h=releases&id=aab58857330bb4bd03f6699bf1ee716f72993774
> > >> https://inbox.dpdk.org/dev/20210430125725.28796-6-mk@semihalf.com/T/#me99457c706718bb236d1fd8006ee7a0319ce76fc
> > >>
> > >> Could you please help here and let me know which patch could fix this issue?
> > >>
> >
> > + Shai Brandes and ena-dev
> >
> > Hi Amiya,
> >
> > Thanks for reaching out. Could you please provide us with more
> > details regarding the reproduction? I cannot reproduce this on my
> > setup for DPDK v20.11.1 when using testpmd and probing for the xstats.
> >
> > ======================================================================
> > [ec2-user@ dpdk]$ sudo ./build/app/dpdk-testpmd -- -i
> > EAL: Detected 8 lcore(s)
> > EAL: Detected 1 NUMA nodes
> > EAL: Detected static linkage of DPDK
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: No available hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: Invalid NUMA socket, default to 0
> > EAL: Invalid NUMA socket, default to 0
> > EAL: Probe PCI driver: net_ena (1d0f:ec20) device: 0000:00:06.0 (socket 0)
> > EAL: No legacy callbacks, legacy socket not created
> > Interactive-mode selected
> > ena_mtu_set(): Set MTU: 1500
> >
> > testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last
> > port will pair with itself.
> >
> > Configuring Port 0 (socket 0)
> > Port 0:
> > Checking link statuses...
> > Done
> > Error during enabling promiscuous mode for port 0: Operation not
> > supported - ignore
> > testpmd> start
> > io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
> > enabled, MP allocation mode: native
> > Logical Core 1 (socket 0) forwards packets on 1 streams:
> >   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> >
> >   io packet forwarding packets/burst=32
> >   nb forwarding cores=1 - nb forwarding ports=1
> >   port 0: RX queue number: 1 Tx queue number: 1
> >     Rx offloads=0x0 Tx offloads=0x0
> >     RX queue: 0
> >       RX desc=0 - RX free threshold=0
> >       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >       RX Offloads=0x0
> >     TX queue: 0
> >       TX desc=0 - TX free threshold=0
> >       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >       TX offloads=0x0 - TX RS bit threshold=0
> > testpmd> show port xstats 0
> > ###### NIC extended statistics for port 0
> > rx_good_packets: 1
> > tx_good_packets: 1
> > rx_good_bytes: 42
> > tx_good_bytes: 42
> > rx_missed_errors: 0
> > rx_errors: 0
> > tx_errors: 0
> > rx_mbuf_allocation_errors: 0
> > rx_q0_packets: 1
> > rx_q0_bytes: 42
> > rx_q0_errors: 0
> > tx_q0_packets: 1
> > tx_q0_bytes: 42
> > wd_expired: 0
> > dev_start: 1
> > dev_stop: 0
> > tx_drops: 0
> > bw_in_allowance_exceeded: 0
> > bw_out_allowance_exceeded: 0
> > pps_allowance_exceeded: 0
> > conntrack_allowance_exceeded: 0
> > linklocal_allowance_exceeded: 0
> > rx_q0_cnt: 1
> > rx_q0_bytes: 42
> > rx_q0_refill_partial: 0
> > rx_q0_bad_csum: 0
> > rx_q0_mbuf_alloc_fail: 0
> > rx_q0_bad_desc_num: 0
> > rx_q0_bad_req_id: 0
> > tx_q0_cnt: 1
> > tx_q0_bytes: 42
> > tx_q0_prepare_ctx_err: 0
> > tx_q0_linearize: 0
> > tx_q0_linearize_failed: 0
> > tx_q0_tx_poll: 1
> > tx_q0_doorbells: 1
> > tx_q0_bad_req_id: 0
> > tx_q0_available_desc: 1022
> > ======================================================================
> >
> > I think you may be seeing the regression because of the new xstats (ENI
> > limiters), which were added after DPDK v19.11 (mainline commit:
> > 45718ada5fa12619db4821646ba964a2df365c68), but I'm not sure why you
> > are hitting it.
> >
> > In particular, I've got a few questions below.
> >
> > 1. Is the application you're using single-process or multi-process?
> > If multi-process, from which process are you probing for the xstats?
> > 2. Have you tried running the latest DPDK v20.11 LTS?
> > 3. What kernel module are you using (igb_uio/vfio-pci)?
> > 4. On what AWS instance type was it reproduced?
> > 5. Does the segfault happen the first time you query the xstats?
> >
> > If you've got any other information which could be useful, please
> > share it; it will help us with resolving the cause of the issue.
> >
> > Thanks,
> > Michal
> >
> > >>
> > >> Regards
> > >> Amiya
>
> Try getting xstats in the secondary process.
> I think that is where the bug was found.

Thanks Stephen, indeed the issue reproduces in the secondary process.

Basically, ENA v2.2.1 is not MP aware, meaning it cannot be used safely
from a secondary process. The main obstacle is the admin queue, which is
used for processing the hardware requests and can be used safely only
from the primary process. It's not strictly a bug, as we weren't exposing
'MP Awareness' in the PMD features list; it's more like a lack of proper
MP support.
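To make the failure mode concrete, the reproduction boils down to
something like the snippet below. This is a minimal sketch, not the
application that crashed: the port id, file name and EAL arguments are
illustrative, and it assumes a primary process (e.g. testpmd) already
owns the ENA port.

    /* repro_secondary.c: query xstats from a *secondary* process while
     * a primary process owns the ENA port. Port id 0 is illustrative. */
    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_debug.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int
    main(int argc, char **argv)
    {
            /* Run with e.g.: ./repro_secondary --proc-type=secondary */
            if (rte_eal_init(argc, argv) < 0)
                    rte_exit(EXIT_FAILURE, "EAL init failed\n");

            /* First call with a NULL array returns the required count. */
            int n = rte_eth_xstats_get(0, NULL, 0);
            if (n <= 0)
                    rte_exit(EXIT_FAILURE, "cannot get xstats count\n");

            struct rte_eth_xstat *xstats = calloc(n, sizeof(*xstats));

            /* On ENA v2.2.1 this is where the driver ends up talking to
             * the device (e.g. for the new ENI stats) from the secondary
             * process, i.e. the unsafe admin queue path described above. */
            n = rte_eth_xstats_get(0, xstats, n);
            for (int i = 0; i < n; i++)
                    printf("id %" PRIu64 ": %" PRIu64 "\n",
                           xstats[i].id, xstats[i].value);

            free(xstats);
            return rte_eal_cleanup();
    }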
The latest ENA PMD release should be MP safe. We currently don't have a
PMD backport ready for the older LTS releases (but we're planning to do
so for ENA v2.6.0 on the amzn-drivers repository:
https://github.com/amzn/amzn-drivers/tree/master/userspace/dpdk).

Here is a list of the patches added across the ENA PMD releases that
are connected with the MP support:

* net/ena: make ethdev references multi-process safe
  aab58857330bb4bd03f6699bf1ee716f72993774
* net/ena: disable ops not supported by secondary process
  39ecdd3dfa15d5ac591ce8d77d362480bff32355
* net/ena: proxy AQ calls to primary process (this is the critical patch)
  e3595539e0e03f0dbb81904f8edaaef0447a4f62
* net/ena: enable stats for multi-process mode
  3aa3fa851f58873457bdc5c387d0e5956f812322
* net/ena/base: make IO memzone unique per port
  850e1bb1c72b3d1163b2857ab7a02af11ba29c40

Thanks,
Michal
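PS: In case it helps with evaluating a backport, the "proxy AQ calls to
primary process" change is conceptually along the lines of the sketch
below. This is a hand-wavy illustration on top of DPDK's generic
rte_mp_* IPC API, not the actual net/ena code; the action name, the
request struct and the helper functions are made up for the example.

    /* Conceptual sketch of proxying admin queue (AQ) calls to the
     * primary process over DPDK multi-process IPC. All names here
     * (ENA_MP_NAME, struct ena_mp_req, the helpers) are illustrative. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <rte_eal.h>
    #include <rte_string_fns.h>

    #define ENA_MP_NAME "ena_mp_sketch"

    struct ena_mp_req {        /* hypothetical request payload */
            uint16_t port_id;
            int cmd;
    };

    /* Primary side: receives the request, issues the real AQ command
     * (only the primary may touch the admin queue) and replies. */
    static int
    mp_primary_handle(const struct rte_mp_msg *msg, const void *peer)
    {
            struct rte_mp_msg rsp;

            memset(&rsp, 0, sizeof(rsp));
            rte_strlcpy(rsp.name, ENA_MP_NAME, sizeof(rsp.name));
            /* ... execute the AQ command described by msg->param and
             * copy the result into rsp.param / rsp.len_param ... */
            return rte_mp_reply(&rsp, peer);
    }

    /* Secondary side: never touches the AQ; asks the primary instead. */
    static int
    mp_secondary_request(uint16_t port_id, int cmd)
    {
            struct rte_mp_msg req;
            struct rte_mp_reply reply;
            struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
            struct ena_mp_req *body;

            memset(&req, 0, sizeof(req));
            rte_strlcpy(req.name, ENA_MP_NAME, sizeof(req.name));
            body = (struct ena_mp_req *)(void *)req.param;
            body->port_id = port_id;
            body->cmd = cmd;
            req.len_param = sizeof(*body);

            if (rte_mp_request_sync(&req, &reply, &ts) < 0)
                    return -1;
            /* ... read the result back from reply.msgs[0].param ... */
            free(reply.msgs);  /* reply buffer is allocated by the EAL */
            return 0;
    }

    /* At init time only the primary registers the handler. */
    static void
    mp_init(void)
    {
            if (rte_eal_process_type() == RTE_PROC_PRIMARY)
                    rte_mp_action_register(ENA_MP_NAME, mp_primary_handle);
    }

The design point is simply that only the primary process ever drives
the admin queue; secondaries turn would-be AQ accesses into IPC
requests and wait for the primary's reply.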