From: Michał Krawczyk
Date: Tue, 19 Apr 2022 14:10:23 +0200
Subject: Re: DPDK:20.11.1: net/ena crash while fetching xstats
To: Amiya Mohakud
Cc: dev <dev@dpdk.org>, Sachin Kanoje, Megha Punjani, Sharad Saha, Eswar Sadaram, "Brandes, Shai", ena-dev

On Mon, 18 Apr 2022 at 17:19, Amiya Mohakud wrote:
>
> + Megha, Sharad and Eswar.
>
> On Mon, Apr 18, 2022 at 2:03 PM Amiya Mohakud wrote:
>>
>> Hi Michal/DPDK-Experts,
>>
>> I am facing an issue in the net/ena driver while fetching extended stats (xstats). DPDK segfaults with the backtrace below.
>>
>> DPDK version: 20.11.1
>> ENA version: 2.2.1
>>
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>
>> Core was generated by `/opt/dpfs/usr/local/bin/brdagent'.
>>
>> Program terminated with signal SIGSEGV, Segmentation fault.
>>
>> #0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:232
>> 232       VMOVU %VEC(0), (%rdi)
>> [Current thread is 1 (Thread 0x7fffed93a400 (LWP 5060))]
>>
>> Thread 1 (Thread 0x7fffed93a400 (LWP 5060)):
>> #0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:232
>> #1  0x00007ffff3c246df in ena_com_handle_admin_completion () from ../lib64/../../lib64/libdpdk.so.20
>> #2  0x00007ffff3c1e7f5 in ena_interrupt_handler_rte () from ../lib64/../../lib64/libdpdk.so.20
>> #3  0x00007ffff3519902 in eal_intr_thread_main () from /../lib64/../../lib64/libdpdk.so.20
>> #4  0x00007ffff510714a in start_thread (arg=) at pthread_create.c:479
>> #5  0x00007ffff561ff23 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>>
>> Background:
>>
>> This used to work fine with DPDK 19.11.3, i.e. no crash was observed with that version. After upgrading to DPDK 20.11.1, DPDK crashes with the above trace.
>> It looks to me like a DPDK issue.
>> I can see multiple fixes/patches in the net/ena area, but I am not able to identify which patch would fix this issue.
>>
>> For example: http://git.dpdk.org/dpdk/diff/?h=releases&id=aab58857330bb4bd03f6699bf1ee716f72993774
>> https://inbox.dpdk.org/dev/20210430125725.28796-6-mk@semihalf.com/T/#me99457c706718bb236d1fd8006ee7a0319ce76fc
>>
>> Could you please help here and let me know which patch would fix this issue.

+ Shai Brandes and ena-dev

Hi Amiya,

Thanks for reaching out. Could you please provide us with more details
regarding the reproduction? I cannot reproduce this on my setup for
DPDK v20.11.1 when using testpmd and probing for the xstats.
======================================================================
[ec2-user@ dpdk]$ sudo ./build/app/dpdk-testpmd -- -i
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Invalid NUMA socket, default to 0
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_ena (1d0f:ec20) device: 0000:00:06.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
ena_mtu_set(): Set MTU: 1500
testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0:
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
testpmd> start
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 1
tx_good_packets: 1
rx_good_bytes: 42
tx_good_bytes: 42
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 1
rx_q0_bytes: 42
rx_q0_errors: 0
tx_q0_packets: 1
tx_q0_bytes: 42
wd_expired: 0
dev_start: 1
dev_stop: 0
tx_drops: 0
bw_in_allowance_exceeded: 0
bw_out_allowance_exceeded: 0
pps_allowance_exceeded: 0
conntrack_allowance_exceeded: 0
linklocal_allowance_exceeded: 0
rx_q0_cnt: 1
rx_q0_bytes: 42
rx_q0_refill_partial: 0
rx_q0_bad_csum: 0
rx_q0_mbuf_alloc_fail: 0
rx_q0_bad_desc_num: 0
rx_q0_bad_req_id: 0
tx_q0_cnt: 1
tx_q0_bytes: 42
tx_q0_prepare_ctx_err: 0
tx_q0_linearize: 0
tx_q0_linearize_failed: 0
tx_q0_tx_poll: 1
tx_q0_doorbells: 1
tx_q0_bad_req_id: 0
tx_q0_available_desc: 1022
======================================================================

I think you can see the regression because of the new xstats (ENI
limiters), which were added after DPDK v19.11 (mainline commit:
45718ada5fa12619db4821646ba964a2df365c68), but I'm not sure why you
can see it. In particular, I've got a few questions below.

1. Is the application you're using single-process or multi-process? If
   multi-process, from which process are you probing for the xstats?
2. Have you tried running the latest DPDK v20.11 LTS?
3. What kernel module are you using (igb_uio/vfio-pci)?
4. On what AWS instance type was it reproduced?
5. Does the segfault happen the first time you call for the xstats?

If you've got any other information which could be useful, please
share; it will help us resolve the cause of the issue.

Thanks,
Michal

>>
>> Regards
>> Amiya
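[Editor's note] The xstats probing that testpmd performs above goes through the generic ethdev xstats API, and a standalone polling loop can help answer question 5 (first call vs. repeated calls). The sketch below is illustrative only, not the reporter's application: it assumes DPDK 20.11 development headers, a single started ENA port with port id 0, and EAL arguments passed on the command line.

```c
/* Minimal xstats polling sketch (hypothetical repro helper, not from
 * the original report). Each rte_eth_xstats_get() call on net/ena may
 * query the device admin queue, the same path serviced by the EAL
 * interrupt thread seen in the backtrace, so a tight loop stresses it. */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    uint16_t port_id = 0;  /* assumption: single ENA port, id 0 */

    /* First call with NULL/0 returns the number of available xstats. */
    int n = rte_eth_xstats_get(port_id, NULL, 0);
    if (n < 0)
        rte_exit(EXIT_FAILURE, "cannot query xstats count\n");

    struct rte_eth_xstat *xstats = calloc(n, sizeof(*xstats));
    struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
    if (xstats == NULL || names == NULL)
        rte_exit(EXIT_FAILURE, "allocation failed\n");
    rte_eth_xstats_get_names(port_id, names, n);

    /* Poll repeatedly to check whether the crash needs one call or many. */
    for (int iter = 0; iter < 1000; iter++) {
        int ret = rte_eth_xstats_get(port_id, xstats, n);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "xstats fetch failed\n");
        if (iter == 0)
            for (int i = 0; i < ret; i++)
                printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
    }

    free(xstats);
    free(names);
    rte_eal_cleanup();
    return 0;
}
```

Running the same loop from a secondary process as well would also help answer question 1, since xstats requests from a secondary process take a different path to the admin queue than primary-process requests.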