From: David Marchand
Date: Tue, 10 Jan 2023 11:16:58 +0100
Subject: Re: [PATCH] net/iavf: fix slow memory allocation
To: "You, KaisenX"
Cc: Ferruh Yigit, dev@dpdk.org, "Burakov, Anatoly", stable@dpdk.org,
 "Yang, Qiming", "Zhou, YidingX", "Wu, Jingjing", "Xing, Beilei",
 "Zhang, Qi Z", Luca Boccassi, "Mcnamara, John", Kevin Traynor
References: <20221117065726.277672-1-kaisenx.you@intel.com>
 <3ad04278-59c0-0c60-5c8c-9e57f33bb0de@amd.com>
List-Id: DPDK patches and discussions

Hello,

On Tue, Dec 27, 2022 at 7:06 AM You, KaisenX wrote:
> > > > > > > I tried to play a bit with an E810 nic on a dual numa system
> > > > > > > and I can't see anything wrong for now.
> > > > > > > Can you provide a simple and small reproducer of your issue?
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > This is my environment:
> > > > > > Enter "lscpu" on the command line:
> > > > > > NUMA:
> > > > > >   NUMA node(s):       2
> > > > > >   NUMA node0 CPU(s):  0-27,56-83
> > > > > >   NUMA node1 CPU(s):  28-55,84-111
> > > > > >
> > > > > > List the steps to reproduce the issue:
> > > > > >
> > > > > > 1. Create a vf and bind it to dpdk:
> > > > > > echo 1 > /sys/bus/pci/devices/0000\:ca\:00.0/sriov_numvfs
> > > > > > ./usertools/dpdk-devbind.py -b vfio-pci 0000:ca:01.0
> > > > > > 2. Launch testpmd:
> > > > > > ./x86_64-native-linuxapp-clang/app/dpdk-testpmd -l 28-48 -n 4 \
> > > > > >   -a 0000:ca:01.0 --file-prefix=dpdk_525342_20221104042659 \
> > > > > >   -- -i --rxq=256 --txq=256 --total-num-mbufs=500000
> > > > > >
> > > > > > Parameter description:
> > > > > > "-l 28-48": the core range given to "-l" must be within
> > > > > > "NUMA node1 CPU(s)"
> > > > > > "0000:ca:01.0": the device sits on node1
> > > > >
> > > > > - Back to your topic.
> > > > > Can you try this simple hack:
> > > > >
> > > > > diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
> > > > > index c5d8b4327d..92160c7fa6 100644
> > > > > --- a/lib/eal/common/eal_common_thread.c
> > > > > +++ b/lib/eal/common/eal_common_thread.c
> > > > > @@ -253,6 +253,7 @@ static void *ctrl_thread_init(void *arg)
> > > > >         void *routine_arg = params->arg;
> > > > >
> > > > >         __rte_thread_init(rte_lcore_id(), cpuset);
> > > > > +       RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
> > > > >         params->ret = pthread_setaffinity_np(pthread_self(),
> > > > >                         sizeof(*cpuset), cpuset);
> > > > >         if (params->ret != 0) {
> > > > >
> > > > Thanks for your advice.
> > > > But this issue still exists after I tried it.
> > >
> > > Ok, I think I understand what is wrong... but I am still guessing, as
> > > I am not sure what your "issue" is.
> > > Can you have a try with:
> > > https://patchwork.dpdk.org/project/dpdk/patch/20221221104858.296530-1-david.marchand@redhat.com/
> > >
> > > Thanks.
> > >
> > I think this issue is similar to the description in the patch you gave
> > me.
> >
> > When the DPDK application is started on only one NUMA node, the
> > interrupt thread finds memory on another NUMA node. This leads to a
> > whole set of memory allocation/release operations every time
> > "rte_malloc" is called. This is the root cause of this issue.
> >
> > With your patch applied, the issue is solved.
> > Thanks for your advice.
>
> After further testing in a different environment, we found that the
> issue still exists with your last patch. After troubleshooting, we
> found that in the "malloc_get_numa_socket()" API, if the return value
> of "rte_socket_id()" is "SOCKET_ID_ANY" (-1), the API returns
> "rte_lcore_to_socket_id(rte_get_main_lcore())"; otherwise,
> "malloc_get_numa_socket()" directly returns the return value of
> "rte_socket_id()", and in that case the issue cannot be solved.
>
> The return value of "rte_socket_id()" is modified by the solution you
> suggested in your last email (RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;).
> Therefore, I think merging your two suggestions together could
> completely solve this issue.
>
> Can you please update your patch accordingly?
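To make sure we are looking at the same code path, here is a minimal
sketch of the heap selection logic discussed above (paraphrased from
this thread, not a verbatim copy of lib/eal/common/malloc_heap.c):

#include <rte_lcore.h>   /* rte_socket_id(), rte_get_main_lcore(), ... */
#include <rte_memory.h>  /* SOCKET_ID_ANY */

/* Paraphrased sketch, not the exact DPDK implementation. */
static unsigned int
malloc_get_numa_socket(void)
{
	unsigned int socket_id = rte_socket_id();

	/* A control thread whose affinity spans several NUMA nodes
	 * reports SOCKET_ID_ANY (-1); fall back to the main lcore
	 * socket instead of repeatedly allocating on a remote heap.
	 */
	if (socket_id == (unsigned int)SOCKET_ID_ANY)
		return rte_lcore_to_socket_id(rte_get_main_lcore());

	return socket_id;
}

With the ctrl_thread_init() hack above also applied, control threads
report SOCKET_ID_ANY and therefore take the fallback path, which is the
"merge of the two suggestions" you describe.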
Please try the last revision and report back.
https://patchwork.dpdk.org/project/dpdk/list/?series=26362

Thanks.


-- 
David Marchand