From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 17 Sep 2023 14:27:53 -0700
From: Stephen Hemminger
To: Gabor LENCSE
Cc: users@dpdk.org
Subject: Re: rte_exit() does not terminate the program -- is it a bug or a new feature?
Message-ID: <20230917142753.596c988a@hermes.local>
In-Reply-To: <930f2885-caec-7297-65f7-0959dd6d550c@hit.bme.hu>
List-Id: DPDK usage discussions

On Sun, 17 Sep 2023 21:37:30 +0200
Gabor LENCSE wrote:

> However, l2fwd also uses the "rte_exit()" function to terminate the
> program. The only difference is that it calls the "rte_exit()" function
> from the main program, and I do so in a thread started by the
> "rte_eal_remote_launch()" function.
Calling rte_exit() in a thread other than the main thread won't work,
because the cleanup code calls rte_eal_cleanup(), and inside that it ends
up waiting for all worker lcores. Since the thread you are calling from is
itself a worker, it ends up waiting for itself. The call chain is:

    rte_exit()
        rte_eal_cleanup()
            rte_service_finalize()
                rte_eal_mp_wait_lcore()

    void
    rte_eal_mp_wait_lcore(void)
    {
        unsigned int lcore_id;

        RTE_LCORE_FOREACH_WORKER(lcore_id) {
            rte_eal_wait_lcore(lcore_id);
        }
    }

Either the service handling needs to be smarter, rte_exit() needs to check
whether it is being called from the main lcore, and/or the documentation
needs an update. It is not a simple fix, because in order to run the
cleanup logic safely, all threads have to have reached a quiescent state.