From: David Marchand
Date: Tue, 20 Dec 2022 10:33:36 +0100
Subject: Re: [PATCH] net/iavf:fix slow memory allocation
To: "You, KaisenX"
Cc: Ferruh Yigit, dev@dpdk.org, "Burakov, Anatoly", stable@dpdk.org,
 "Yang, Qiming", "Zhou, YidingX", "Wu, Jingjing", "Xing, Beilei",
 "Zhang, Qi Z", Luca Boccassi, "Mcnamara, John", Kevin Traynor
References: <20221117065726.277672-1-kaisenx.you@intel.com>
 <3ad04278-59c0-0c60-5c8c-9e57f33bb0de@amd.com>
List-Id: patches for DPDK stable branches

On Tue, Dec 20, 2022 at 7:52 AM You, KaisenX wrote:
> > >> As to the reason for not using rte_malloc_socket(): I thought
> > >> rte_malloc_socket() could solve the problem too, and the appropriate
> > >> parameter should be the socket_id that created the memory pool for
> > >> DPDK initialization. Assuming that the socket_id of the initially
> > >> allocated memory = 1, first let the eal_intr_thread determine if it
> > >> is on that socket_id, then record this socket_id in the
> > >> eal_intr_thread and pass it to the iavf_event_thread. But there
> > >> seems to be no way to link this parameter to the
> > >> iavf_dev_event_post() function. That is why rte_malloc_socket() is
> > >> not used.
> > >>
> > >
> > > I was thinking the socket id of the device could be used, but that
> > > won't help if the core that the interrupt handler runs on is in a
> > > different socket. And I also don't know if there is a way to get the
> > > socket that the interrupt thread is on. @David may help perhaps.
> > >
> > > So the question is why the interrupt thread is not running on the
> > > main lcore.
> > >
> >
> > OK, after some talk with David, what I was missing is that
> > 'rte_ctrl_thread_create()' does NOT run on the main lcore; it can run
> > on any core except data plane cores.
> >
> > The driver "iavf-event-thread" thread (iavf_dev_event_handle()) and
> > the interrupt thread (so the driver interrupt callback
> > iavf_dev_event_post()) can run on any core, making it hard to manage.
> > And it seems it is not possible to control where the interrupt thread
> > runs.
> >
> > One option can be allocating hugepages for all sockets, but this
> > requires user involvement and can't happen transparently.
> >
> > Another option can be to control where "iavf-event-thread" runs, like
> > using 'rte_thread_create()' to create the thread and provide an
> > attribute to run it on the main lcore
> > (rte_lcore_cpuset(rte_get_main_lcore()))?
> >
> > Can you please test the above option?
> >
> >
> The first option can solve this issue, but to borrow from your previous
> saying, "in a dual socket system, if all used cores are in socket 1 and
> the NIC is in socket 1, no memory is allocated for socket 0. This is to
> optimize memory consumption." I think it's unreasonable to do so.
>
> About the other option: in the rte_eal_intr_init() function, after the
> thread is created, I set the thread affinity for eal-intr-thread, but
> it does not solve this issue.

Jumping in this thread.

I tried to play a bit with an E810 NIC on a dual-NUMA system and I can't
see anything wrong for now.

Can you provide a simple and small reproducer of your issue?

Thanks.

-- 
David Marchand
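
As a side note, the NUMA-targeted allocation discussed above is just
rte_malloc_socket() with an explicit socket id (for an application this
would typically be the device's socket, e.g. from
rte_eth_dev_socket_id()). A minimal sketch; alloc_on_socket(), the type
string and the alignment are illustrative placeholders, not the actual
driver code:

    #include <rte_common.h>
    #include <rte_malloc.h>

    /* Illustrative helper: allocate from a caller-supplied socket
     * instead of the socket of the thread doing the allocation. */
    static void *
    alloc_on_socket(size_t size, int socket_id)
    {
        return rte_malloc_socket("iavf_event", size,
                                 RTE_CACHE_LINE_SIZE, socket_id);
    }

As noted above, the hard part is not the call itself but plumbing a
meaningful socket_id down to iavf_dev_event_post().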
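
For what it is worth, a minimal sketch of the second option (pinning the
event thread to the main lcore's cpuset) could look like the following.
This is only an illustration against the DPDK 22.11-era
rte_thread_*/rte_lcore_* API (parts of which are experimental);
event_thread_body() and create_event_thread_on_main_lcore() are invented
names, not the actual iavf driver code:

    #include <rte_lcore.h>
    #include <rte_thread.h>

    /* Stand-in for the driver's event loop (iavf_dev_event_handle());
     * returns when the driver asks the thread to stop. */
    static uint32_t
    event_thread_body(void *arg)
    {
        (void)arg;
        /* ... dequeue and dispatch driver events here ... */
        return 0;
    }

    /* Create the event thread pinned to the main lcore's cpuset, so any
     * memory it (indirectly) allocates comes from a socket that has
     * hugepages reserved. */
    static int
    create_event_thread_on_main_lcore(rte_thread_t *tid)
    {
        rte_thread_attr_t attr;
        rte_cpuset_t cpuset = rte_lcore_cpuset(rte_get_main_lcore());
        int ret;

        ret = rte_thread_attr_init(&attr);
        if (ret != 0)
            return ret;
        ret = rte_thread_attr_set_affinity(&attr, &cpuset);
        if (ret != 0)
            return ret;

        return rte_thread_create(tid, &attr, event_thread_body, NULL);
    }

The trade-off is the one already raised in this thread: pinning keeps
the control thread's allocations on a socket that has hugepages, at the
cost of sharing a CPU with the main lcore.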