Date: Fri, 15 Sep 2023 14:33:05 -0700
From: Stephen Hemminger
To: Gabor LENCSE
Cc: users@dpdk.org
Subject: Re: rte_exit() does not terminate the program -- is it a bug or a new feature?
Message-ID: <20230915143305.0ac313c6@hermes.local>
In-Reply-To: <35b55d11-bb67-2363-6f0a-0fb9667ebe6d@hit.bme.hu>
References: <3ef90e53-c28b-09e8-e3f3-2b78727114ff@hit.bme.hu> <20230915080608.724f0102@hermes.local> <35b55d11-bb67-2363-6f0a-0fb9667ebe6d@hit.bme.hu>
List-Id: DPDK usage discussions

On Fri, 15 Sep 2023 20:28:44 +0200 Gabor LENCSE wrote:

> Dear Stephen,
>
> Thank you very much for your answer!
>
> > Please get a backtrace. Simple way is to attach gdb to that process.
>
> I have recompiled siitperf with the "-g" compiler option and executed it
> from gdb.
> When the program stopped, I pressed Ctrl-C and issued a "bt"
> command, but of course, it displayed the call stack of the main thread.
> Then I collected some information about the threads using the "info
> threads" command, and after that I switched to each available thread and
> issued a "bt" command for those that represented my send() and receive()
> functions (I identified them by their LWP numbers). Here are the results:
>
> root@x033:~/siitperf# gdb ./build/siitperf-tp
> GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
> Copyright (C) 2022 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> Type "show copying" and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> .
> Find the GDB manual and other documentation resources online at:
>     .
>
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from ./build/siitperf-tp...
> (gdb) set args 84 8000000 60 2000 2 2
> (gdb) run
> Starting program: /root/siitperf/build/siitperf-tp 84 8000000 60 2000 2 2
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> EAL: Detected CPU lcores: 56
> EAL: Detected NUMA nodes: 4
> EAL: Detected shared linkage of DPDK
> [New Thread 0x7ffff49c0640 (LWP 24747)]
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> [New Thread 0x7ffff41bf640 (LWP 24748)]
> EAL: Selected IOVA mode 'PA'
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No free 2048 kB hugepages reported on node 2
> EAL: No free 2048 kB hugepages reported on node 3
> EAL: No available 2048 kB hugepages reported
> EAL: VFIO support initialized
> [New Thread 0x7ffff39be640 (LWP 24749)]
> [New Thread 0x7ffff31bd640 (LWP 24750)]
> [New Thread 0x7ffff29bc640 (LWP 24751)]
> [New Thread 0x7ffff21bb640 (LWP 24752)]
> EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:98:00.0 (socket 2)
> ice_load_pkg_type(): Active package is: 1.3.26.0, ICE OS Default Package
> (single VLAN mode)
> EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:98:00.1 (socket 2)
> ice_load_pkg_type(): Active package is: 1.3.26.0, ICE OS Default Package
> (single VLAN mode)
> [New Thread 0x7ffff19ba640 (LWP 24753)]
> TELEMETRY: No legacy callbacks, legacy socket not created
> ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
> ice_set_rx_function(): Using AVX2 Vector Rx (port 1).
> Info: Left port and Left Sender CPU core belong to the same NUMA node: 2
> Info: Right port and Right Receiver CPU core belong to the same NUMA node: 2
> Info: Right port and Right Sender CPU core belong to the same NUMA node: 2
> Info: Left port and Left Receiver CPU core belong to the same NUMA node: 2
> Info: Testing initiated at 2023-09-15 18:06:05
> Reverse frames received: 394340224
> Forward frames received: 421381420
> Info: Forward sender's sending took 70.3073795726 seconds.
> EAL: Error - exiting with code: 1
>   Cause: Forward sending exceeded the 60.0006000000 seconds limit, the
> test is invalid.
> Info: Reverse sender's sending took 74.9384769772 seconds.
> EAL: Error - exiting with code: 1
>   Cause: Reverse sending exceeded the 60.0006000000 seconds limit, the
> test is invalid.
> ^C
> Thread 1 "siitperf-tp" received signal SIGINT, Interrupt.
> 0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> (gdb) bt
> #0  0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #1  0x000055555559929e in Throughput::measure (this=0x7fffffffe300,
> leftport=0, rightport=1) at throughput.cc:3743
> #2  0x0000555555557b20 in main (argc=7, argv=0x7fffffffe5b8) at
> main-tp.cc:34
> (gdb) info threads
>   Id   Target Id                                          Frame
> * 1    Thread 0x7ffff77cac00 (LWP 24744) "siitperf-tp"
> 0x00007ffff7d99dd2 in rte_eal_wait_lcore ()
>    from /lib/x86_64-linux-gnu/librte_eal.so.22
>   2    Thread 0x7ffff49c0640 (LWP 24747) "eal-intr-thread"
> 0x00007ffff7a32fde in epoll_wait (epfd=6, events=0x7ffff49978d0,
>     maxevents=3, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
>   3    Thread 0x7ffff41bf640 (LWP 24748) "rte_mp_handle"
> __recvmsg_syscall (flags=0, msg=0x7ffff41965c0, fd=9)
>     at ../sysdeps/unix/sysv/linux/recvmsg.c:27
>   4    Thread 0x7ffff39be640 (LWP 24749) "lcore-worker-1"
> 0x00007ffff7d99dd2 in rte_eal_wait_lcore ()
>    from /lib/x86_64-linux-gnu/librte_eal.so.22
>   5    Thread 0x7ffff31bd640 (LWP 24750) "lcore-worker-5"
> __GI___libc_read (nbytes=1, buf=0x7ffff31947cf, fd=40)
>     at
../sysdeps/unix/sysv/linux/read.c:26
>   6    Thread 0x7ffff29bc640 (LWP 24751) "lcore-worker-9"
> 0x00007ffff7d99dd2 in rte_eal_wait_lcore ()
>    from /lib/x86_64-linux-gnu/librte_eal.so.22
>   7    Thread 0x7ffff21bb640 (LWP 24752) "lcore-worker-13"
> __GI___libc_read (nbytes=1, buf=0x7ffff21927cf, fd=48)
>     at ../sysdeps/unix/sysv/linux/read.c:26
>   8    Thread 0x7ffff19ba640 (LWP 24753) "telemetry-v2"
> 0x00007ffff7a3460f in __libc_accept (fd=58, addr=..., len=0x0)
>     at ../sysdeps/unix/sysv/linux/accept.c:26
> (gdb) thread 1
> [Switching to thread 1 (Thread 0x7ffff77cac00 (LWP 24744))]
> #0  0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> (gdb) thread 2
> [Switching to thread 2 (Thread 0x7ffff49c0640 (LWP 24747))]
> #0  0x00007ffff7a32fde in epoll_wait (epfd=6, events=0x7ffff49978d0,
> maxevents=3, timeout=-1)
>     at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
> 30      ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.
> (gdb) thread 3
> [Switching to thread 3 (Thread 0x7ffff41bf640 (LWP 24748))]
> #0  __recvmsg_syscall (flags=0, msg=0x7ffff41965c0, fd=9) at
> ../sysdeps/unix/sysv/linux/recvmsg.c:27
> 27      ../sysdeps/unix/sysv/linux/recvmsg.c: No such file or directory.
> (gdb) thread 4
> [Switching to thread 4 (Thread 0x7ffff39be640 (LWP 24749))]
> #0  0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> (gdb) bt
> #0  0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #1  0x00007ffff7d99f97 in rte_eal_mp_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #2  0x00007ffff7da99ee in rte_service_finalize () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #3  0x00007ffff7db0404 in rte_eal_cleanup () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #4  0x00007ffff7d9d0b7 in rte_exit () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #5  0x000055555558e685 in send (par=0x7fffffffde00) at throughput.cc:1562
> #6  0x00007ffff7d94a18 in ?? () from /lib/x86_64-linux-gnu/librte_eal.so.22
> #7  0x00007ffff79a1b43 in start_thread (arg=<optimized out>) at
> ./nptl/pthread_create.c:442
> #8  0x00007ffff7a33a00 in clone3 () at
> ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> (gdb) thread 5
> [Switching to thread 5 (Thread 0x7ffff31bd640 (LWP 24750))]
> #0  __GI___libc_read (nbytes=1, buf=0x7ffff31947cf, fd=40) at
> ../sysdeps/unix/sysv/linux/read.c:26
> 26      ../sysdeps/unix/sysv/linux/read.c: No such file or directory.
> (gdb) bt
> #0  __GI___libc_read (nbytes=1, buf=0x7ffff31947cf, fd=40) at
> ../sysdeps/unix/sysv/linux/read.c:26
> #1  __GI___libc_read (fd=40, buf=0x7ffff31947cf, nbytes=1) at
> ../sysdeps/unix/sysv/linux/read.c:24
> #2  0x00007ffff7d9490c in ??
() from /lib/x86_64-linux-gnu/librte_eal.so.22
> #3  0x00007ffff79a1b43 in start_thread (arg=<optimized out>) at
> ./nptl/pthread_create.c:442
> #4  0x00007ffff7a33a00 in clone3 () at
> ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> (gdb) thread 6
> [Switching to thread 6 (Thread 0x7ffff29bc640 (LWP 24751))]
> #0  0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> (gdb) bt
> #0  0x00007ffff7d99dd2 in rte_eal_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #1  0x00007ffff7d99f97 in rte_eal_mp_wait_lcore () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #2  0x00007ffff7da99ee in rte_service_finalize () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #3  0x00007ffff7db0404 in rte_eal_cleanup () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #4  0x00007ffff7d9d0b7 in rte_exit () from
> /lib/x86_64-linux-gnu/librte_eal.so.22
> #5  0x000055555558e685 in send (par=0x7fffffffde80) at throughput.cc:1562
> #6  0x00007ffff7d94a18 in ?? () from /lib/x86_64-linux-gnu/librte_eal.so.22
> #7  0x00007ffff79a1b43 in start_thread (arg=<optimized out>) at
> ./nptl/pthread_create.c:442
> #8  0x00007ffff7a33a00 in clone3 () at
> ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> (gdb) thread 7
> [Switching to thread 7 (Thread 0x7ffff21bb640 (LWP 24752))]
> #0  __GI___libc_read (nbytes=1, buf=0x7ffff21927cf, fd=48) at
> ../sysdeps/unix/sysv/linux/read.c:26
> 26      in ../sysdeps/unix/sysv/linux/read.c
> (gdb) bt
> #0  __GI___libc_read (nbytes=1, buf=0x7ffff21927cf, fd=48) at
> ../sysdeps/unix/sysv/linux/read.c:26
> #1  __GI___libc_read (fd=48, buf=0x7ffff21927cf, nbytes=1) at
> ../sysdeps/unix/sysv/linux/read.c:24
> #2  0x00007ffff7d9490c in ??
() from /lib/x86_64-linux-gnu/librte_eal.so.22
> #3  0x00007ffff79a1b43 in start_thread (arg=<optimized out>) at
> ./nptl/pthread_create.c:442
> #4  0x00007ffff7a33a00 in clone3 () at
> ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> (gdb) thread 8
> [Switching to thread 8 (Thread 0x7ffff19ba640 (LWP 24753))]
> #0  0x00007ffff7a3460f in __libc_accept (fd=58, addr=..., len=0x0) at
> ../sysdeps/unix/sysv/linux/accept.c:26
> 26      ../sysdeps/unix/sysv/linux/accept.c: No such file or directory.
> (gdb) thread 9
> Unknown thread 9.
> (gdb)
>
> Some additional information from the siitperf.conf file:
>
> CPU-L-Send 1 # Left Sender runs on this core
> CPU-R-Recv 5 # Right Receiver runs on this core
> CPU-R-Send 9 # Right Sender runs on this core
> CPU-L-Recv 13 # Left Receiver runs on this core
>
> Therefore, the "send()" functions are the ones that remain running on
> CPU cores 1 and 9. And they fully utilize their cores (as does the
> main program with core 0); I checked this earlier.
>
> This is a -- perhaps -- relevant part of the code of the main program:
>
>       // wait until active senders and receivers finish
>       if ( forward ) {
>         rte_eal_wait_lcore(cpu_left_sender);
>         rte_eal_wait_lcore(cpu_right_receiver);
>       }
>       if ( reverse ) {
>         rte_eal_wait_lcore(cpu_right_sender);
>         rte_eal_wait_lcore(cpu_left_receiver);
>       }
>
> It seems to me as if the two send functions and also the main program
> were actively waiting in the rte_eal_wait_lcore() function. But I
> have no idea why.
> If the sender that sent frames in the forward
> direction is there, and the main program is there, then, IMHO, the
> rte_eal_wait_lcore(cpu_left_sender) call should finish.
>
> Am I wrong?
>
> > I suspect that since rte_exit() calls the internal eal_cleanup function,
> > and that calls close in the driver, the ICE driver close function has
> > a bug. Perhaps the ice close function does not correctly handle the case
> > where the device has not started.
>
> Yes, your hypothesis was confirmed: both of my send() functions were in
> the rte_eal_cleanup() function. :-)
>
> However, I am not sure which device you meant. But I think they
> are initialized properly, because I can ensure a successful execution
> (and finish) of the program by halving the frame rate:
>
> root@x033:~/siitperf# ./build/siitperf-tp 84 4000000 6 2000 2 2
> EAL: Detected CPU lcores: 56
> EAL: Detected NUMA nodes: 4
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No free 2048 kB hugepages reported on node 2
> EAL: No free 2048 kB hugepages reported on node 3
> EAL: No available 2048 kB hugepages reported
> EAL: VFIO support initialized
> EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:98:00.0 (socket 2)
> ice_load_pkg_type(): Active package is: 1.3.26.0, ICE OS Default Package
> (single VLAN mode)
> EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:98:00.1 (socket 2)
> ice_load_pkg_type(): Active package is: 1.3.26.0, ICE OS Default Package
> (single VLAN mode)
> TELEMETRY: No legacy callbacks, legacy socket not created
> ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
> ice_set_rx_function(): Using AVX2 Vector Rx (port 1).
> Info: Left port and Left Sender CPU core belong to the same NUMA node: 2
> Info: Right port and Right Receiver CPU core belong to the same NUMA node: 2
> Info: Right port and Right Sender CPU core belong to the same NUMA node: 2
> Info: Left port and Left Receiver CPU core belong to the same NUMA node: 2
> Info: Testing initiated at 2023-09-15 17:43:11
> Info: Reverse sender's sending took 5.9999998420 seconds.
> Reverse frames sent: 24000000
> Info: Forward sender's sending took 5.9999999023 seconds.
> Forward frames sent: 24000000
> Forward frames received: 24000000
> Reverse frames received: 24000000
> Info: Test finished.
> root@x033:~/siitperf#
>
> (The only problem with this trick is that I want to use a binary search
> to determine the performance limit of the tester, and if the tester does
> not stop, my bash shell script waits for it forever.)
>
> So, what should I do next?

I am not sure what the tx and rx polling loops look like in your application,
but they need some way of being forced to exit, and you need to set that
flag before calling rte_exit(). See the l2fwd example and its force_quit
flag for an example.