Date: Thu, 29 Jun 2023 20:47:45 -0700
From: Stephen Hemminger
To: "Burakov, Anatoly"
Cc: xiangxia.m.yue@gmail.com, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] eal/linux: add operation LOCK_NB to flock()
Message-ID: <20230629204745.0abfe19e@hermes.local>
In-Reply-To: <13b8ee99-39a7-c0a6-39fc-e126802d656d@intel.com>
References: <20210325082125.37488-1-xiangxia.m.yue@gmail.com> <13b8ee99-39a7-c0a6-39fc-e126802d656d@intel.com>
List-Id: DPDK patches and discussions

On Thu, 15 Apr 2021 15:24:01 +0100 "Burakov, Anatoly" wrote:

> On 25-Mar-21 8:21 AM, xiangxia.m.yue@gmail.com wrote:
> > From: Tonghao Zhang
> >
> > Hugepages of different sizes (2MB, 1GB) may be mounted on
> > the same directory (e.g. /dev/hugepages). The DPDK
> > primary process will then block. To address this issue,
> > add the LOCK_NB flag to flock().
> >
> > $ cat /proc/mounts
> > ...
> > none /dev/hugepages hugetlbfs rw,seclabel,relatime,pagesize=1024M 0 0
> > none /dev/hugepages hugetlbfs rw,seclabel,relatime,pagesize=2M 0 0
> >
> > Also add more detail to the error log.
> >
> > Signed-off-by: Tonghao Zhang
> > ---
> >  lib/librte_eal/linux/eal_hugepage_info.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_eal/linux/eal_hugepage_info.c b/lib/librte_eal/linux/eal_hugepage_info.c
> > index d97792cadeb6..1ff76e539053 100644
> > --- a/lib/librte_eal/linux/eal_hugepage_info.c
> > +++ b/lib/librte_eal/linux/eal_hugepage_info.c
> > @@ -451,9 +451,12 @@ hugepage_info_init(void)
> >  		hpi->lock_descriptor = open(hpi->hugedir, O_RDONLY);
> >  
> >  		/* if blocking lock failed */
> > -		if (flock(hpi->lock_descriptor, LOCK_EX) == -1) {
> > +		if (flock(hpi->lock_descriptor, LOCK_EX | LOCK_NB) == -1) {
> >  			RTE_LOG(CRIT, EAL,
> > -				"Failed to lock hugepage directory!\n");
> > +				"Failed to lock hugepage directory! "
> > +				"The hugepage dir (%s) was locked by "
> > +				"other processes or self twice.\n",
> > +				hpi->hugedir);
> >  			break;
> >  		}
> >  		/* clear out the hugepages dir from unused pages */
> 
> Use cases such as "having two hugetlbfs page sizes on the same hugetlbfs
> mountpoint" are user error, but I agree that deadlocking is probably not
> the way we want to handle it.
> 
> An alternative would be to check whether we already have a mountpoint
> with the same path; that would produce a better error message (as a
> user, "hugepage dir is locked by self twice" doesn't tell me anything
> useful), at the cost of slightly more complicated code.
> 
> I'm not sure which way I want to go here. Normally, hugetlbfs shouldn't
> stay locked for long, so I'm wary of adding LOCK_NB, and I feel
> slightly uneasy about this patch. Do you have any opinions?
> 
> Also, do other OSes' EALs need a similar fix?

Dropping this patch. It is one of those "it hurts when I do this stupid thing" patches.