From: David Marchand
Date: Tue, 20 Dec 2022 10:33:36 +0100
Subject: Re: [PATCH] net/iavf: fix slow memory allocation
To: "You, KaisenX"
Cc: Ferruh Yigit, dev@dpdk.org, "Burakov, Anatoly", stable@dpdk.org, "Yang, Qiming", "Zhou, YidingX", "Wu, Jingjing", "Xing, Beilei", "Zhang, Qi Z", Luca Boccassi, "Mcnamara, John", Kevin Traynor
References: <20221117065726.277672-1-kaisenx.you@intel.com> <3ad04278-59c0-0c60-5c8c-9e57f33bb0de@amd.com>
List-Id: DPDK patches and discussions

On Tue, Dec 20, 2022 at 7:52 AM You, KaisenX wrote:
>
> > >> As to the reason for not using rte_malloc_socket: I thought
> > >> rte_malloc_socket() could solve the problem too, and the appropriate
> > >> parameter should be the socket_id that created the memory pool for
> > >> DPDK initialization.
> > >> Assuming that the socket_id of the initially allocated memory = 1,
> > >> first let the eal_intr_thread determine if it is on that socket_id,
> > >> then record this socket_id in the eal_intr_thread and pass it to the
> > >> iavf_event_thread. But there seems to be no way to link this
> > >> parameter to the iavf_dev_event_post() function. That is why
> > >> rte_malloc_socket is not used.
> > >
> > > I was thinking the socket id of the device could be used, but that
> > > won't help if the core the interrupt handler runs on is in a
> > > different socket. And I also don't know if there is a way to get the
> > > socket that the interrupt thread is on. @David may help perhaps.
> > >
> > > So the question is why the interrupt thread is not running on the
> > > main lcore.
> >
> > OK, after some talk with David, what I was missing is that
> > 'rte_ctrl_thread_create()' does NOT run on the main lcore; it can run
> > on any core except the data plane cores.
> >
> > The driver "iavf-event-thread" thread (iavf_dev_event_handle()) and
> > the interrupt thread (so the driver interrupt callback
> > iavf_dev_event_post()) can run on any core, making this hard to
> > manage. And it seems it is not possible to control where the
> > interrupt thread runs.
> >
> > One option can be allocating hugepages for all sockets, but this
> > requires user involvement and can't happen transparently.
> >
> > The other option can be to control where "iavf-event-thread" runs,
> > e.g. using 'rte_thread_create()' to create the thread and providing
> > an attribute to run it on the main lcore
> > (rte_lcore_cpuset(rte_get_main_lcore()))?
> >
> > Can you please test the above option?
>
> The first option can solve this issue, but to borrow from your previous
> saying, "in a dual socket system, if all used cores are in socket 1 and
> the NIC is in socket 1, no memory is allocated for socket 0. This is to
> optimize memory consumption." I think it's unreasonable to do so.
>
> About the other option:
In " rte_eal_intr_init" function, After the thread is created, > I set the thread affinity for eal-intr-thread, but it does not solve this issue. Jumping in this thread. I tried to play a bit with a E810 nic on a dual numa and I can't see anything wrong for now. Can you provide a simple and small reproducer of your issue? Thanks. -- David Marchand