From: Patrick Robb
Date: Sat, 6 Apr 2024 15:20:32 -0400
Subject: Re: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
To: Morten Brørup
Cc: Nicholas Pratte, paul.szczepanek@arm.com, juraj.linkes@pantheon.tech, yoan.picchi@foss.arm.com, thomas@monjalon.net, wathsala.vithanage@arm.com, Honnappa.Nagarahalli@arm.com, dev@dpdk.org, Jeremy Spewock
List-Id: DPDK patches and discussions

On Sat, Apr 6, 2024 at 4:47 AM Morten Brørup wrote:
>
> This change seems very CPU specific.
>
> E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
>
> I don't know the recommended hugepage size on other architectures.
>

Thanks Morten, that's an important insight which we weren't aware of when
we initially discussed this ticket.

We read in some DPDK docs that 1 GB hugepages should be set at boot (I
think the reason is that boot time is the only point at which you can
guarantee there are GBs of contiguous memory available), and that after
boot only 2 MB hugepages should be set. I assume that even for other
arches which don't support the 2 MB pages specifically, we still want to
allocate hugepages using the smallest page size possible per arch (for
the same reason).

So I think we can add some dict which stores the smallest valid hugepage
size per arch. Then, during config init, use the config's arch value to
look up that size, and set the total hugepage allocation based on that
size and the hugepage count set in conf.yaml.

Or maybe we can store the list of all valid hugepage sizes per arch
(which are also valid to set post boot), allow a new hugepage_size value
in conf.yaml, validate the input at config init, and then set according
to those values. I prefer the former option though, as I don't think the
added flexibility offered by the latter is important.
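
To make the first option concrete, here is a rough sketch of what I have
in mind. The names (Architecture, _SMALLEST_RUNTIME_HUGEPAGE_SIZE_KB,
compute_hugepage_allocation) and the per-arch values are hypothetical,
not existing DTS code, and the sizes would need to be confirmed per arch:

from enum import Enum, auto


class Architecture(Enum):
    """Hypothetical arch enum; DTS already carries an arch value in its config."""
    x86_64 = auto()
    i686 = auto()
    arm64 = auto()
    ppc64le = auto()


# Assumed smallest hugepage size (in KB) that can still be allocated at
# runtime, per architecture. Values below are placeholders to be verified
# (e.g. 32-bit x86 uses 4 MB pages, per Morten's comment above).
_SMALLEST_RUNTIME_HUGEPAGE_SIZE_KB: dict[Architecture, int] = {
    Architecture.x86_64: 2048,   # 2 MB
    Architecture.i686: 4096,     # 4 MB
    Architecture.arm64: 2048,    # 2 MB with 4K granules
    Architecture.ppc64le: 2048,  # placeholder, to be confirmed
}


def compute_hugepage_allocation(arch: Architecture, hugepage_count: int) -> tuple[int, int]:
    """Return (page_size_kb, total_kb) for the hugepage count from conf.yaml.

    Config init would call something like this to pick the page size for
    the configured arch and derive the total memory to reserve.
    """
    page_size_kb = _SMALLEST_RUNTIME_HUGEPAGE_SIZE_KB[arch]
    return page_size_kb, page_size_kb * hugepage_count


# Example: 256 hugepages on x86_64 -> (2048, 524288), i.e. 512 MB total.
print(compute_hugepage_allocation(Architecture.x86_64, 256))

The point is just that conf.yaml keeps a single hugepage count and the
framework picks the page size from the arch, rather than users choosing a
size themselves.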