* Re: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-04 15:31 [PATCH] dts: Change hugepage runtime config to 2MB Exclusively Nicholas Pratte
@ 2024-04-05 14:27 ` Patrick Robb
2024-04-06 8:47 ` Morten Brørup
` (3 subsequent siblings)
4 siblings, 0 replies; 20+ messages in thread
From: Patrick Robb @ 2024-04-05 14:27 UTC (permalink / raw)
To: Nicholas Pratte
Cc: paul.szczepanek, juraj.linkes, yoan.picchi, thomas,
wathsala.vithanage, Honnappa.Nagarahalli, dev, Jeremy Spewock
Recheck-request: iol-intel-Functional
checking for lab infra failure
On Thu, Apr 4, 2024 at 11:31 AM Nicholas Pratte <npratte@iol.unh.edu> wrote:
>
> The previous implementation configures and allocates hugepage sizes
> based on a system default. This can lead to two problems: overallocation of
> hugepages (which may crash the remote host), and configuration of hugepage
> sizes that are not recommended during runtime. This new implementation
> allows only 2MB hugepage allocation during runtime; any other unique
> hugepage size must be configured by the end-user for initializing DTS.
>
> If the amount of 2MB hugepages requested exceeds the amount of 2MB
> hugepages already configured on the system, then the system will remount
> hugepages to cover the difference. If the amount of hugepages requested is
> either less than or equal to the amount already configured on the system,
> then nothing is done.
>
> Bugzilla ID: 1370
> Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
> dts/conf.yaml | 8 ++++----
> dts/framework/config/__init__.py | 4 ++--
> dts/framework/config/conf_yaml_schema.json | 6 +++---
> dts/framework/config/types.py | 2 +-
> dts/framework/testbed_model/linux_session.py | 21 +++++++++-----------
> 5 files changed, 19 insertions(+), 22 deletions(-)
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 8068345dd5..3690e77ee5 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -35,8 +35,8 @@ nodes:
> lcores: "" # use all the available logical cores
> use_first_core: false # tells DPDK to use any physical core
> memory_channels: 4 # tells DPDK to use 4 memory channels
> - hugepages: # optional; if removed, will use system hugepage configuration
> - amount: 256
> + hugepages_2mb: # optional; if removed, will use system hugepage configuration
> + amount: 2560
> force_first_numa: false
> ports:
> # sets up the physical link between "SUT 1"@000:00:08.0 and "TG 1"@0000:00:08.0
> @@ -71,8 +71,8 @@ nodes:
> os_driver: rdma
> peer_node: "SUT 1"
> peer_pci: "0000:00:08.1"
> - hugepages: # optional; if removed, will use system hugepage configuration
> - amount: 256
> + hugepages_2mb: # optional; if removed, will use system hugepage configuration
> + amount: 2560
> force_first_numa: false
> traffic_generator:
> type: SCAPY
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index 4cb5c74059..b6f820e39e 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -255,8 +255,8 @@ def from_dict(
> Either an SUT or TG configuration instance.
> """
> hugepage_config = None
> - if "hugepages" in d:
> - hugepage_config_dict = d["hugepages"]
> + if "hugepages_2mb" in d:
> + hugepage_config_dict = d["hugepages_2mb"]
> if "force_first_numa" not in hugepage_config_dict:
> hugepage_config_dict["force_first_numa"] = False
> hugepage_config = HugepageConfiguration(**hugepage_config_dict)
> diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
> index 4731f4511d..f4d7199523 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -146,7 +146,7 @@
> "compiler"
> ]
> },
> - "hugepages": {
> + "hugepages_2mb": {
> "type": "object",
> "description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
> "properties": {
> @@ -253,8 +253,8 @@
> "type": "integer",
> "description": "How many memory channels to use. Optional, defaults to 1."
> },
> - "hugepages": {
> - "$ref": "#/definitions/hugepages"
> + "hugepages_2mb": {
> + "$ref": "#/definitions/hugepages_2mb"
> },
> "ports": {
> "type": "array",
> diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
> index 1927910d88..016e0c3dbd 100644
> --- a/dts/framework/config/types.py
> +++ b/dts/framework/config/types.py
> @@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
> """Allowed keys and values."""
>
> #:
> - hugepages: HugepageConfigurationDict
> + hugepages_2mb: HugepageConfigurationDict
> #:
> name: str
> #:
> diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
> index 5d24030c3d..ad3c811301 100644
> --- a/dts/framework/testbed_model/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -15,7 +15,7 @@
>
> from typing_extensions import NotRequired
>
> -from framework.exception import RemoteCommandExecutionError
> +from framework.exception import ConfigurationError, RemoteCommandExecutionError
> from framework.utils import expand_range
>
> from .cpu import LogicalCore
> @@ -87,25 +87,22 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
> self._logger.info("Getting Hugepage information.")
> - hugepage_size = self._get_hugepage_size()
> + if "hugepages-2048kB" not in self.send_command("ls /sys/kernel/mm/hugepages").stdout:
> + raise ConfigurationError("2MB hugepages not supported by operating system")
> hugepages_total = self._get_hugepages_total()
> self._numa_nodes = self._get_numa_nodes()
>
> - if force_first_numa or hugepages_total != hugepage_count:
> + if force_first_numa or hugepages_total < hugepage_count:
> # when forcing numa, we need to clear existing hugepages regardless
> # of size, so they can be moved to the first numa node
> - self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
> + self._configure_huge_pages(hugepage_count, force_first_numa)
> else:
> self._logger.info("Hugepages already configured.")
> self._mount_huge_pages()
>
> - def _get_hugepage_size(self) -> int:
> - hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
> - return int(hugepage_size)
> -
> def _get_hugepages_total(self) -> int:
> hugepages_total = self.send_command(
> - "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
> + "cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
> ).stdout
> return int(hugepages_total)
>
> @@ -136,15 +133,15 @@ def _supports_numa(self) -> bool:
> # there's no reason to do any numa specific configuration)
> return len(self._numa_nodes) > 1
>
> - def _configure_huge_pages(self, amount: int, size: int, force_first_numa: bool) -> None:
> + def _configure_huge_pages(self, amount: int, force_first_numa: bool) -> None:
> self._logger.info("Configuring Hugepages.")
> - hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
> + hugepage_config_path = "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
> if force_first_numa and self._supports_numa():
> # clear non-numa hugepages
> self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
> hugepage_config_path = (
> f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
> - f"/hugepages-{size}kB/nr_hugepages"
> + f"/hugepages-2048kB/nr_hugepages"
> )
>
> self.send_command(f"echo {amount} | tee {hugepage_config_path}", privileged=True)
> --
> 2.44.0
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-04 15:31 [PATCH] dts: Change hugepage runtime config to 2MB Exclusively Nicholas Pratte
2024-04-05 14:27 ` Patrick Robb
@ 2024-04-06 8:47 ` Morten Brørup
2024-04-06 19:20 ` Patrick Robb
2024-04-09 17:28 ` [PATCH v2] " Nicholas Pratte
` (2 subsequent siblings)
4 siblings, 1 reply; 20+ messages in thread
From: Morten Brørup @ 2024-04-06 8:47 UTC (permalink / raw)
To: Nicholas Pratte, paul.szczepanek, juraj.linkes, yoan.picchi,
probb, thomas, wathsala.vithanage, Honnappa.Nagarahalli
Cc: dev, Jeremy Spewock
> From: Nicholas Pratte [mailto:npratte@iol.unh.edu]
> Sent: Thursday, 4 April 2024 17.31
>
> The previous implementation configures and allocates hugepage sizes
> based on a system default. This can lead to two problems: overallocation
> of
> hugepages (which may crash the remote host), and configuration of
> hugepage
> sizes that are not recommended during runtime. This new implementation
> allows only 2MB hugepage allocation during runtime; any other unique
> hugepage size must be configured by the end-user for initializing DTS.
>
> If the amount of 2MB hugepages requested exceeds the amount of 2MB
> hugepages already configured on the system, then the system will remount
> hugepages to cover the difference. If the amount of hugepages requested
> is
> either less than or equal to the amount already configured on the
> system,
> then nothing is done.
>
> Bugzilla ID: 1370
> Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
This change seems very CPU specific.
E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
I don't know the recommended hugepage size on other architectures.
* Re: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-06 8:47 ` Morten Brørup
@ 2024-04-06 19:20 ` Patrick Robb
2024-04-06 22:04 ` Morten Brørup
0 siblings, 1 reply; 20+ messages in thread
From: Patrick Robb @ 2024-04-06 19:20 UTC (permalink / raw)
To: Morten Brørup
Cc: Nicholas Pratte, paul.szczepanek, juraj.linkes, yoan.picchi,
thomas, wathsala.vithanage, Honnappa.Nagarahalli, dev,
Jeremy Spewock
On Sat, Apr 6, 2024 at 4:47 AM Morten Brørup <mb@smartsharesystems.com> wrote:
>
>
> This change seems very CPU specific.
>
> E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
>
> I don't know the recommended hugepage size on other architectures.
>
Thanks Morten, that's an important insight which we weren't aware of
when we initially discussed this ticket.
We read in some DPDK docs that 1 GB hugepages should be set at boot (I
think the reason is that boot is the only time you can guarantee there
are gigabytes of contiguous available memory), and that after boot,
only 2 MB hugepages should be set. I assume that even for other arches
which don't support the 2 MB pages specifically, we still want to
allocate hugepages using the smallest page size possible per arch (for
the same reason).
So I think we can add some dict which stores the smallest valid
hugepage size per arch. Then during config init, use the config's arch
value to determine that size, and set the total hugepages allocation
based on that size and the hugepages count set in the conf.yaml. Or
maybe we can store the list of all valid hugepage sizes per arch
(which are also valid to set post boot), allow for a new
hugepage_size value in the conf.yaml, validate the input at config
init, and then set according to those values. I prefer the former
option, though, as I don't think the added flexibility offered by the
latter is important.
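For illustration, the former option could look something like the sketch below. The names and the exact size table are hypothetical (not part of the DTS framework), and the per-arch values would need verification; they simply follow the sizes discussed in this thread:

```python
# Hypothetical mapping of architecture -> hugepage size (in kB) that is
# sensible to allocate at runtime. Illustrative only; values would need
# to be confirmed per architecture.
RUNTIME_HUGEPAGE_SIZE_KB = {
    "x86_64": 2048,  # 2 MB
    "arm64": 2048,   # 2 MB (ARM also supports 64 kB, 32 MB and 1 GB pages)
    "i686": 4096,    # 4 MB in non-PAE 32-bit mode
}

def runtime_hugepage_size_kb(arch: str) -> int:
    """Look up the runtime-configurable hugepage size for an arch."""
    try:
        return RUNTIME_HUGEPAGE_SIZE_KB[arch]
    except KeyError:
        raise ValueError(f"no known runtime hugepage size for arch {arch!r}")
```

Config init would then multiply the hugepage count from conf.yaml by this size to arrive at the total allocation for the node.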
* RE: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-06 19:20 ` Patrick Robb
@ 2024-04-06 22:04 ` Morten Brørup
2024-04-08 8:10 ` Bruce Richardson
0 siblings, 1 reply; 20+ messages in thread
From: Morten Brørup @ 2024-04-06 22:04 UTC (permalink / raw)
To: Patrick Robb
Cc: Nicholas Pratte, paul.szczepanek, juraj.linkes, yoan.picchi,
thomas, wathsala.vithanage, Honnappa.Nagarahalli, dev,
Jeremy Spewock
> From: Patrick Robb [mailto:probb@iol.unh.edu]
> Sent: Saturday, 6 April 2024 21.21
>
> On Sat, Apr 6, 2024 at 4:47 AM Morten Brørup <mb@smartsharesystems.com>
> wrote:
> >
> >
> > This change seems very CPU specific.
> >
> > E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
> >
> > I don't know the recommended hugepage size on other architectures.
> >
>
> Thanks Morten, that's an important insight which we weren't aware of
> when we initially discussed this ticket.
>
> We read on some dpdk docs that 1gb hugepages should be set at boot (I
> think the reason is because that's when you can guarantee there is gbs
> of contiguous available memory), and that after boot, only 2 MB
> hugepages should be set. I assume that even for other arches which
> don't support the 2mb pages specifically, we still want to allocate
> hugepages using the smallest page size possible per arch (for the same
> reason).
Correct; very large hugepages need to be set at boot.
I don't remember why, but that's the way the Linux kernel works.
2 MB is well below that threshold, and 1 GB is above it.
Also, you cannot set nr_overcommit_hugepages for those very large hugepages, only nr_hugepages.
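For reference, the two knobs in question live under per-size directories in sysfs. A small helper (illustrative only, not DTS code) makes the distinction concrete:

```python
def hugepage_sysfs_paths(size_kb: int) -> dict[str, str]:
    """Return the standard Linux sysfs paths for a given hugepage size.

    nr_hugepages can be written at runtime for sizes such as 2 MB
    (2048 kB); nr_overcommit_hugepages is not usable for boot-time-only
    sizes such as 1 GB (1048576 kB).
    """
    base = f"/sys/kernel/mm/hugepages/hugepages-{size_kb}kB"
    return {
        "persistent": f"{base}/nr_hugepages",
        "overcommit": f"{base}/nr_overcommit_hugepages",
    }
```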
>
> So I think we can add some dict which stores the smallest valid
> hugepage size per arch. Then during config init, use the config's arch
> value to determine that size, and set the total hugepages allocation
> based on that size and the hugepages count set in the conf.yaml. Or
> maybe we can store the list of all valid hugepage sizes per arch
> (which are also valid to be set post boot), allow for a new
> hugepage_size value on the conf.yaml, validate the input at config
> init, and then set according to those values. I prefer the former
> option though as I don't think the added flexibility offered by the
> latter seems important.
I agree; the former option suffices.
A tiny detail...
ARM supports multiple (4, I think) different hugepage sizes, where the smallest size is 64 KB.
So you might want to choose another hugepage size than the smallest; but I still agree with your proposed concept of using one specific hugepage size per arch.
Like x86_64, ARM also supports 2 MB and 1 GB hugepage sizes, and 2 MB hugepages is also the typical default on Linux.
I don't know which hugepage sizes are supported by other architectures.
It might only be 32-bit x86 that needs a different hugepage size than 2 MB.
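The set of sizes a given host actually supports can be read straight from sysfs, which is also what the patch's `ls /sys/kernel/mm/hugepages` check relies on. A sketch of parsing that listing (helper name is hypothetical):

```python
import re

def supported_hugepage_sizes_kb(ls_output: str) -> list[int]:
    """Parse `ls /sys/kernel/mm/hugepages` output into sizes in kB.

    Directory names follow the kernel's hugepages-<size>kB convention.
    """
    return sorted(
        int(m.group(1)) for m in re.finditer(r"hugepages-(\d+)kB", ls_output)
    )
```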
* Re: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-06 22:04 ` Morten Brørup
@ 2024-04-08 8:10 ` Bruce Richardson
2024-04-08 9:35 ` Morten Brørup
0 siblings, 1 reply; 20+ messages in thread
From: Bruce Richardson @ 2024-04-08 8:10 UTC (permalink / raw)
To: Morten Brørup
Cc: Patrick Robb, Nicholas Pratte, paul.szczepanek, juraj.linkes,
yoan.picchi, thomas, wathsala.vithanage, Honnappa.Nagarahalli,
dev, Jeremy Spewock
On Sun, Apr 07, 2024 at 12:04:24AM +0200, Morten Brørup wrote:
> > From: Patrick Robb [mailto:probb@iol.unh.edu]
> > Sent: Saturday, 6 April 2024 21.21
> >
> > On Sat, Apr 6, 2024 at 4:47 AM Morten Brørup <mb@smartsharesystems.com>
> > wrote:
> > >
> > >
> > > This change seems very CPU specific.
> > >
> > > E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
> > >
> > > I don't know the recommended hugepage size on other architectures.
> > >
> >
> > Thanks Morten, that's an important insight which we weren't aware of
> > when we initially discussed this ticket.
> >
> > We read on some dpdk docs that 1gb hugepages should be set at boot (I
> > think the reason is because that's when you can guarantee there is gbs
> > of contiguous available memory), and that after boot, only 2 MB
> > hugepages should be set. I assume that even for other arches which
> > don't support the 2mb pages specifically, we still want to allocate
> > hugepages using the smallest page size possible per arch (for the same
> > reason).
>
> Correct; for very large hugepages, they need to be set at boot.
> I don't remember why, but that's the way the Linux kernel works.
> 2 MB is way below that threshold, and 1 GB is above.
>
> You can also not set nr_overcommit_hugepages for those very large hugepages, only nr_hugepages.
>
> >
> > So I think we can add some dict which stores the smallest valid
> > hugepage size per arch. Then during config init, use the config's arch
> > value to determine that size, and set the total hugepages allocation
> > based on that size and the hugepages count set in the conf.yaml. Or
> > maybe we can store the list of all valid hugepage sizes per arch
> > (which are also valid to be set post boot), allow for a new
> > hugepage_size value on the conf.yaml, validate the input at config
> > init, and then set according to those values. I prefer the former
> > option though as I don't think the added flexibility offered by the
> > latter seems important.
>
> I agree; the former option suffices.
>
> A tiny detail...
> ARM supports multiple (4, I think) different hugepage sizes, where the smallest size is 64 KB.
> So you might want to choose another hugepage size than the smallest; but I still agree with your proposed concept of using one specific hugepage size per arch.
>
> Like x86_64, ARM also supports 2 MB and 1 GB hugepage sizes, and 2 MB hugepages is also the typical default on Linux.
>
> I don't know which hugepage sizes are supported by other architectures.
> It might only be 32-bit x86 that needs a different hugepage size than 2 MB.
>
Even for 32-bit x86, I think most distros now use PAE mode to allow physical
addresses >4GB, so even then 32-bit hugepages are 2MB rather than 4MB.
/Bruce
* RE: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-08 8:10 ` Bruce Richardson
@ 2024-04-08 9:35 ` Morten Brørup
2024-04-09 16:57 ` Nicholas Pratte
0 siblings, 1 reply; 20+ messages in thread
From: Morten Brørup @ 2024-04-08 9:35 UTC (permalink / raw)
To: Bruce Richardson, Patrick Robb
Cc: Nicholas Pratte, paul.szczepanek, juraj.linkes, yoan.picchi,
thomas, wathsala.vithanage, Honnappa.Nagarahalli, dev,
Jeremy Spewock
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Monday, 8 April 2024 10.11
>
> On Sun, Apr 07, 2024 at 12:04:24AM +0200, Morten Brørup wrote:
> > > From: Patrick Robb [mailto:probb@iol.unh.edu]
> > > Sent: Saturday, 6 April 2024 21.21
> > >
> > > On Sat, Apr 6, 2024 at 4:47 AM Morten Brørup <mb@smartsharesystems.com>
> > > wrote:
> > > >
> > > >
> > > > This change seems very CPU specific.
> > > >
> > > > E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
> > > >
> > > > I don't know the recommended hugepage size on other architectures.
> > > >
> > >
> > > Thanks Morten, that's an important insight which we weren't aware of
> > > when we initially discussed this ticket.
> > >
> > > We read on some dpdk docs that 1gb hugepages should be set at boot (I
> > > think the reason is because that's when you can guarantee there is gbs
> > > of contiguous available memory), and that after boot, only 2 MB
> > > hugepages should be set. I assume that even for other arches which
> > > don't support the 2mb pages specifically, we still want to allocate
> > > hugepages using the smallest page size possible per arch (for the same
> > > reason).
> >
> > Correct; for very large hugepages, they need to be set at boot.
> > I don't remember why, but that's the way the Linux kernel works.
> > 2 MB is way below that threshold, and 1 GB is above.
> >
> > You can also not set nr_overcommit_hugepages for those very large hugepages,
> only nr_hugepages.
> >
> > >
> > > So I think we can add some dict which stores the smallest valid
> > > hugepage size per arch. Then during config init, use the config's arch
> > > value to determine that size, and set the total hugepages allocation
> > > based on that size and the hugepages count set in the conf.yaml. Or
> > > maybe we can store the list of all valid hugepage sizes per arch
> > > (which are also valid to be set post boot), allow for a new
> > > hugepage_size value on the conf.yaml, validate the input at config
> > > init, and then set according to those values. I prefer the former
> > > option though as I don't think the added flexibility offered by the
> > > latter seems important.
> >
> > I agree; the former option suffices.
> >
> > A tiny detail...
> > ARM supports multiple (4, I think) different hugepage sizes, where the
> smallest size is 64 KB.
> > So you might want to choose another hugepage size than the smallest; but I
> still agree with your proposed concept of using one specific hugepage size per
> arch.
> >
> > Like x86_64, ARM also supports 2 MB and 1 GB hugepage sizes, and 2 MB
> hugepages is also the typical default on Linux.
> >
> > I don't know which hugepage sizes are supported by other architectures.
> > It might only be 32-bit x86 that needs a different hugepage size than 2 MB.
> >
>
> Even for 32-bit x86, I think most distros now use PAE mode to allow physical
> addresses >4GB, so even then 32-bit hugepages are 2MB rather than 4MB.
In order to save you from more work on this patch, we could assume/hope that all architectures support 2 MB hugepages, and drop the discussed per-architecture size dict.
At least until the assumption no longer holds true.
Then I only have two further comments to the patch:
1. Don't increase from 256 to 2560 hugepages (in dts/conf.yaml), unless there is a reason for it. If there is, please mention the increase and reason in the patch description.
2. Don't remove the size parameter from _configure_huge_pages() (in /dts/framework/testbed_model/linux_session.py); it might be useful later.
* Re: [PATCH] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-08 9:35 ` Morten Brørup
@ 2024-04-09 16:57 ` Nicholas Pratte
0 siblings, 0 replies; 20+ messages in thread
From: Nicholas Pratte @ 2024-04-09 16:57 UTC (permalink / raw)
To: Morten Brørup
Cc: Bruce Richardson, Patrick Robb, paul.szczepanek, juraj.linkes,
yoan.picchi, thomas, wathsala.vithanage, Honnappa.Nagarahalli,
dev, Jeremy Spewock
I will change the default hugepage total back to 256. Much of the
justification for changing the default amount was based on guidelines
in the DPDK documentation: in the system requirements section, an
example is provided demonstrating how to allocate 2GB of 2MB
hugepages. We used this reasoning when setting the default amount, as
2560 pages creates 5GB, or 2GB for each NUMA node plus some additional
wiggle room. Much of this was also based on Juraj's initial
suggestion of 256 hugepages being largely arbitrary, though he had
also referenced the aforementioned DPDK documentation when discussing
how many hugepages should be configured by default if nothing is set.
Regardless, raising the default amount was more of a light suggestion
than anything else, so I will just remove this change.
I will also make the changes you suggested for linux_session.py. The
plan is to add the size parameter back in and hardcode the 2048 kB
hugepage size to continue our 2MB logic, with the understanding that
this can be changed in the future.
On Mon, Apr 8, 2024 at 5:35 AM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Monday, 8 April 2024 10.11
> >
> > On Sun, Apr 07, 2024 at 12:04:24AM +0200, Morten Brørup wrote:
> > > > From: Patrick Robb [mailto:probb@iol.unh.edu]
> > > > Sent: Saturday, 6 April 2024 21.21
> > > >
> > > > On Sat, Apr 6, 2024 at 4:47 AM Morten Brørup <mb@smartsharesystems.com>
> > > > wrote:
> > > > >
> > > > >
> > > > > This change seems very CPU specific.
> > > > >
> > > > > E.g. in x86 32-bit mode, the hugepage size is 4 MB, not 2 MB.
> > > > >
> > > > > I don't know the recommended hugepage size on other architectures.
> > > > >
> > > >
> > > > Thanks Morten, that's an important insight which we weren't aware of
> > > > when we initially discussed this ticket.
> > > >
> > > > We read on some dpdk docs that 1gb hugepages should be set at boot (I
> > > > think the reason is because that's when you can guarantee there is gbs
> > > > of contiguous available memory), and that after boot, only 2 MB
> > > > hugepages should be set. I assume that even for other arches which
> > > > don't support the 2mb pages specifically, we still want to allocate
> > > > hugepages using the smallest page size possible per arch (for the same
> > > > reason).
> > >
> > > Correct; for very large hugepages, they need to be set at boot.
> > > I don't remember why, but that's the way the Linux kernel works.
> > > 2 MB is way below that threshold, and 1 GB is above.
> > >
> > > You can also not set nr_overcommit_hugepages for those very large hugepages,
> > only nr_hugepages.
> > >
> > > >
> > > > So I think we can add some dict which stores the smallest valid
> > > > hugepage size per arch. Then during config init, use the config's arch
> > > > value to determine that size, and set the total hugepages allocation
> > > > based on that size and the hugepages count set in the conf.yaml. Or
> > > > maybe we can store the list of all valid hugepage sizes per arch
> > > > (which are also valid to be set post boot), allow for a new
> > > > hugepage_size value on the conf.yaml, validate the input at config
> > > > init, and then set according to those values. I prefer the former
> > > > option though as I don't think the added flexibility offered by the
> > > > latter seems important.
> > >
> > > I agree; the former option suffices.
> > >
> > > A tiny detail...
> > > ARM supports multiple (4, I think) different hugepage sizes, where the
> > smallest size is 64 KB.
> > > So you might want to choose another hugepage size than the smallest; but I
> > still agree with your proposed concept of using one specific hugepage size per
> > arch.
> > >
> > > Like x86_64, ARM also supports 2 MB and 1 GB hugepage sizes, and 2 MB
> > hugepages is also the typical default on Linux.
> > >
> > > I don't know which hugepage sizes are supported by other architectures.
> > > It might only be 32-bit x86 that needs a different hugepage size than 2 MB.
> > >
> >
> > Even for 32-bit x86, I think most distros now use PAE mode to allow physical
> > addresses >4GB, so even then 32-bit hugepages are 2MB rather than 4MB.
>
> In order to save you from more work on this patch, we could assume/hope that all architectures support 2 MB hugepages, and drop the discussed per-architecture size dict.
> At least for until the assumption doesn't hold true anymore.
>
> Then I only have two further comments to the patch:
>
> 1. Don't increase from 256 to 2560 hugepages (in dts/conf.yaml), unless there is a reason for it. If there is, please mention the increase and reason in the patch description.
> 2. Don't remove the size parameter from _configure_huge_pages() (in /dts/framework/testbed_model/linux_session.py); it might be useful later.
>
* [PATCH v2] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-04 15:31 [PATCH] dts: Change hugepage runtime config to 2MB Exclusively Nicholas Pratte
2024-04-05 14:27 ` Patrick Robb
2024-04-06 8:47 ` Morten Brørup
@ 2024-04-09 17:28 ` Nicholas Pratte
2024-04-10 7:23 ` Morten Brørup
2024-04-10 10:43 ` Juraj Linkeš
2024-04-16 18:18 ` [PATCH v3] " Nicholas Pratte
2024-04-18 16:10 ` [PATCH v4] " Nicholas Pratte
4 siblings, 2 replies; 20+ messages in thread
From: Nicholas Pratte @ 2024-04-09 17:28 UTC (permalink / raw)
To: paul.szczepanek, bruce.richardson, yoan.picchi, juraj.linkes,
jspewock, thomas, mb, wathsala.vithanage, probb,
Honnappa.Nagarahalli
Cc: dev, Nicholas Pratte
The previous implementation configures and allocates hugepage sizes
based on a system default. This can lead to two problems: overallocation of
hugepages (which may crash the remote host), and configuration of hugepage
sizes that are not recommended during runtime. This new implementation
allows only 2MB hugepage allocation during runtime; any other unique
hugepage size must be configured by the end-user for initializing DTS.
If the amount of 2MB hugepages requested exceeds the amount of 2MB
hugepages already configured on the system, then the system will remount
hugepages to cover the difference. If the amount of hugepages requested is
either less than or equal to the amount already configured on the system,
then nothing is done.
Bugzilla ID: 1370
Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
---
dts/conf.yaml | 4 ++--
dts/framework/config/__init__.py | 4 ++--
dts/framework/config/conf_yaml_schema.json | 6 +++---
dts/framework/config/types.py | 2 +-
dts/framework/testbed_model/linux_session.py | 15 ++++++---------
5 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 8068345dd5..56c3ae6f4c 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -35,7 +35,7 @@ nodes:
lcores: "" # use all the available logical cores
use_first_core: false # tells DPDK to use any physical core
memory_channels: 4 # tells DPDK to use 4 memory channels
- hugepages: # optional; if removed, will use system hugepage configuration
+ hugepages_2mb: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
ports:
@@ -71,7 +71,7 @@ nodes:
os_driver: rdma
peer_node: "SUT 1"
peer_pci: "0000:00:08.1"
- hugepages: # optional; if removed, will use system hugepage configuration
+ hugepages_2mb: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
traffic_generator:
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 4cb5c74059..b6f820e39e 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -255,8 +255,8 @@ def from_dict(
Either an SUT or TG configuration instance.
"""
hugepage_config = None
- if "hugepages" in d:
- hugepage_config_dict = d["hugepages"]
+ if "hugepages_2mb" in d:
+ hugepage_config_dict = d["hugepages_2mb"]
if "force_first_numa" not in hugepage_config_dict:
hugepage_config_dict["force_first_numa"] = False
hugepage_config = HugepageConfiguration(**hugepage_config_dict)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 4731f4511d..f4d7199523 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -146,7 +146,7 @@
"compiler"
]
},
- "hugepages": {
+ "hugepages_2mb": {
"type": "object",
"description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
"properties": {
@@ -253,8 +253,8 @@
"type": "integer",
"description": "How many memory channels to use. Optional, defaults to 1."
},
- "hugepages": {
- "$ref": "#/definitions/hugepages"
+ "hugepages_2mb": {
+ "$ref": "#/definitions/hugepages_2mb"
},
"ports": {
"type": "array",
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
index 1927910d88..016e0c3dbd 100644
--- a/dts/framework/config/types.py
+++ b/dts/framework/config/types.py
@@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
"""Allowed keys and values."""
#:
- hugepages: HugepageConfigurationDict
+ hugepages_2mb: HugepageConfigurationDict
#:
name: str
#:
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 5d24030c3d..37f5eacb21 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -15,7 +15,7 @@
from typing_extensions import NotRequired
-from framework.exception import RemoteCommandExecutionError
+from framework.exception import ConfigurationError, RemoteCommandExecutionError
from framework.utils import expand_range
from .cpu import LogicalCore
@@ -87,25 +87,22 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
"""Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
self._logger.info("Getting Hugepage information.")
- hugepage_size = self._get_hugepage_size()
+ if "hugepages-2048kB" not in self.send_command("ls /sys/kernel/mm/hugepages").stdout:
+ raise ConfigurationError("2MB hugepages not supported by operating system")
hugepages_total = self._get_hugepages_total()
self._numa_nodes = self._get_numa_nodes()
- if force_first_numa or hugepages_total != hugepage_count:
+ if force_first_numa or hugepages_total < hugepage_count:
# when forcing numa, we need to clear existing hugepages regardless
# of size, so they can be moved to the first numa node
- self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
+ self._configure_huge_pages(hugepage_count, 2048, force_first_numa)
else:
self._logger.info("Hugepages already configured.")
self._mount_huge_pages()
- def _get_hugepage_size(self) -> int:
- hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
- return int(hugepage_size)
-
def _get_hugepages_total(self) -> int:
hugepages_total = self.send_command(
- "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ "cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
).stdout
return int(hugepages_total)
--
2.44.0
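The behavioral change in the condition above, `!=` becoming `<`, can be sketched as a standalone predicate (a sketch for illustration, not the DTS code itself):

```python
# Sketch of the reconfiguration decision in setup_hugepages() after this
# patch: hugepages are (re)configured only when NUMA placement is forced
# or when fewer 2MB pages are configured than requested; an existing
# surplus is left alone.
def needs_hugepage_reconfiguration(
    configured_total: int, requested: int, force_first_numa: bool
) -> bool:
    return force_first_numa or configured_total < requested
```

Before the patch the condition was effectively `configured_total != requested`, so a system that already had more 2MB pages than requested would be needlessly reconfigured.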
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH v2] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-09 17:28 ` [PATCH v2] " Nicholas Pratte
@ 2024-04-10 7:23 ` Morten Brørup
2024-04-10 13:50 ` Patrick Robb
2024-04-10 10:43 ` Juraj Linkeš
1 sibling, 1 reply; 20+ messages in thread
From: Morten Brørup @ 2024-04-10 7:23 UTC (permalink / raw)
To: Nicholas Pratte
Cc: dev, paul.szczepanek, bruce.richardson, yoan.picchi,
juraj.linkes, jspewock, thomas, wathsala.vithanage, probb,
Honnappa.Nagarahalli
> From: Nicholas Pratte [mailto:npratte@iol.unh.edu]
> Sent: Tuesday, 9 April 2024 19.28
>
> The previous implementation configures and allocates hugepage sizes
> based on a system default. This can lead to two problems: overallocation of
> hugepages (which may crash the remote host), and configuration of hugepage
> sizes that are not recommended during runtime. This new implementation
> allows only 2MB hugepage allocation during runtime; any other unique
> hugepage size must be configured by the end-user for initializing DTS.
>
> If the amount of 2MB hugepages requested exceeds the amount of 2MB
> hugepages already configured on the system, then the system will remount
> hugepages to cover the difference. If the amount of hugepages requested is
> either less than or equal to the amount already configured on the system,
> then nothing is done.
>
> Bugzilla ID: 1370
> Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
> dts/conf.yaml | 4 ++--
> dts/framework/config/__init__.py | 4 ++--
> dts/framework/config/conf_yaml_schema.json | 6 +++---
> dts/framework/config/types.py | 2 +-
> dts/framework/testbed_model/linux_session.py | 15 ++++++---------
> 5 files changed, 14 insertions(+), 17 deletions(-)
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 8068345dd5..56c3ae6f4c 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -35,7 +35,7 @@ nodes:
> lcores: "" # use all the available logical cores
> use_first_core: false # tells DPDK to use any physical core
> memory_channels: 4 # tells DPDK to use 4 memory channels
> - hugepages: # optional; if removed, will use system hugepage
> configuration
> + hugepages_2mb: # optional; if removed, will use system hugepage
> configuration
> amount: 256
> force_first_numa: false
> ports:
> @@ -71,7 +71,7 @@ nodes:
> os_driver: rdma
> peer_node: "SUT 1"
> peer_pci: "0000:00:08.1"
> - hugepages: # optional; if removed, will use system hugepage
> configuration
> + hugepages_2mb: # optional; if removed, will use system hugepage
> configuration
> amount: 256
> force_first_numa: false
> traffic_generator:
> diff --git a/dts/framework/config/__init__.py
> b/dts/framework/config/__init__.py
> index 4cb5c74059..b6f820e39e 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -255,8 +255,8 @@ def from_dict(
> Either an SUT or TG configuration instance.
> """
> hugepage_config = None
> - if "hugepages" in d:
> - hugepage_config_dict = d["hugepages"]
> + if "hugepages_2mb" in d:
> + hugepage_config_dict = d["hugepages_2mb"]
> if "force_first_numa" not in hugepage_config_dict:
> hugepage_config_dict["force_first_numa"] = False
> hugepage_config = HugepageConfiguration(**hugepage_config_dict)
> diff --git a/dts/framework/config/conf_yaml_schema.json
> b/dts/framework/config/conf_yaml_schema.json
> index 4731f4511d..f4d7199523 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -146,7 +146,7 @@
> "compiler"
> ]
> },
> - "hugepages": {
> + "hugepages_2mb": {
> "type": "object",
> "description": "Optional hugepage configuration. If not specified,
> hugepages won't be configured and DTS will use system configuration.",
> "properties": {
> @@ -253,8 +253,8 @@
> "type": "integer",
> "description": "How many memory channels to use. Optional,
> defaults to 1."
> },
> - "hugepages": {
> - "$ref": "#/definitions/hugepages"
> + "hugepages_2mb": {
> + "$ref": "#/definitions/hugepages_2mb"
> },
> "ports": {
> "type": "array",
> diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
> index 1927910d88..016e0c3dbd 100644
> --- a/dts/framework/config/types.py
> +++ b/dts/framework/config/types.py
> @@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
> """Allowed keys and values."""
>
> #:
> - hugepages: HugepageConfigurationDict
> + hugepages_2mb: HugepageConfigurationDict
> #:
> name: str
> #:
> diff --git a/dts/framework/testbed_model/linux_session.py
> b/dts/framework/testbed_model/linux_session.py
> index 5d24030c3d..37f5eacb21 100644
> --- a/dts/framework/testbed_model/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -15,7 +15,7 @@
>
> from typing_extensions import NotRequired
>
> -from framework.exception import RemoteCommandExecutionError
> +from framework.exception import ConfigurationError,
> RemoteCommandExecutionError
> from framework.utils import expand_range
>
> from .cpu import LogicalCore
> @@ -87,25 +87,22 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) ->
> None:
You should either rename this to setup_hugepages_2mb() or preferably add a hugepage_size parameter.
> """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
> self._logger.info("Getting Hugepage information.")
> - hugepage_size = self._get_hugepage_size()
> + if "hugepages-2048kB" not in self.send_command("ls
> /sys/kernel/mm/hugepages").stdout:
> + raise ConfigurationError("2MB hugepages not supported by
> operating system")
> hugepages_total = self._get_hugepages_total()
> self._numa_nodes = self._get_numa_nodes()
>
> - if force_first_numa or hugepages_total != hugepage_count:
> + if force_first_numa or hugepages_total < hugepage_count:
> # when forcing numa, we need to clear existing hugepages
> regardless
> # of size, so they can be moved to the first numa node
> - self._configure_huge_pages(hugepage_count, hugepage_size,
> force_first_numa)
> + self._configure_huge_pages(hugepage_count, 2048,
> force_first_numa)
> else:
> self._logger.info("Hugepages already configured.")
> self._mount_huge_pages()
>
> - def _get_hugepage_size(self) -> int:
> - hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}'
> /proc/meminfo").stdout
> - return int(hugepage_size)
> -
Removing _get_hugepage_size() is OK; alternatively rename it to _get_hugepage_default_size().
> def _get_hugepages_total(self) -> int:
Also here, preferably add a size parameter, or rename it to _get_hugepages_2mb_total().
> hugepages_total = self.send_command(
> - "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
> + "cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
> ).stdout
> return int(hugepages_total)
>
> --
> 2.44.0
With suggested changes,
Acked-by: Morten Brørup <mb@smartsharesystems.com>
* Re: [PATCH v2] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-10 7:23 ` Morten Brørup
@ 2024-04-10 13:50 ` Patrick Robb
0 siblings, 0 replies; 20+ messages in thread
From: Patrick Robb @ 2024-04-10 13:50 UTC (permalink / raw)
To: Morten Brørup
Cc: Nicholas Pratte, dev, paul.szczepanek, bruce.richardson,
yoan.picchi, juraj.linkes, jspewock, thomas, wathsala.vithanage,
Honnappa.Nagarahalli
On Wed, Apr 10, 2024 at 3:23 AM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Nicholas Pratte [mailto:npratte@iol.unh.edu]
> > Sent: Tuesday, 9 April 2024 19.28
> >
> > The previous implementation configures and allocates hugepage sizes
> > based on a system default. This can lead to two problems: overallocation of
> > hugepages (which may crash the remote host), and configuration of hugepage
> > sizes that are not recommended during runtime. This new implementation
> > allows only 2MB hugepage allocation during runtime; any other unique
> > hugepage size must be configured by the end-user for initializing DTS.
> >
> > If the amount of 2MB hugepages requested exceeds the amount of 2MB
> > hugepages already configured on the system, then the system will remount
> > hugepages to cover the difference. If the amount of hugepages requested is
> > either less than or equal to the amount already configured on the system,
> > then nothing is done.
> >
> > Bugzilla ID: 1370
> > Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> > Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> > ---
> > dts/conf.yaml | 4 ++--
> > dts/framework/config/__init__.py | 4 ++--
> > dts/framework/config/conf_yaml_schema.json | 6 +++---
> > dts/framework/config/types.py | 2 +-
> > dts/framework/testbed_model/linux_session.py | 15 ++++++---------
> > 5 files changed, 14 insertions(+), 17 deletions(-)
> >
> > diff --git a/dts/conf.yaml b/dts/conf.yaml
> > index 8068345dd5..56c3ae6f4c 100644
> > --- a/dts/conf.yaml
> > +++ b/dts/conf.yaml
> > @@ -35,7 +35,7 @@ nodes:
> > lcores: "" # use all the available logical cores
> > use_first_core: false # tells DPDK to use any physical core
> > memory_channels: 4 # tells DPDK to use 4 memory channels
> > - hugepages: # optional; if removed, will use system hugepage
> > configuration
> > + hugepages_2mb: # optional; if removed, will use system hugepage
> > configuration
> > amount: 256
> > force_first_numa: false
> > ports:
> > @@ -71,7 +71,7 @@ nodes:
> > os_driver: rdma
> > peer_node: "SUT 1"
> > peer_pci: "0000:00:08.1"
> > - hugepages: # optional; if removed, will use system hugepage
> > configuration
> > + hugepages_2mb: # optional; if removed, will use system hugepage
> > configuration
> > amount: 256
> > force_first_numa: false
> > traffic_generator:
> > diff --git a/dts/framework/config/__init__.py
> > b/dts/framework/config/__init__.py
> > index 4cb5c74059..b6f820e39e 100644
> > --- a/dts/framework/config/__init__.py
> > +++ b/dts/framework/config/__init__.py
> > @@ -255,8 +255,8 @@ def from_dict(
> > Either an SUT or TG configuration instance.
> > """
> > hugepage_config = None
> > - if "hugepages" in d:
> > - hugepage_config_dict = d["hugepages"]
> > + if "hugepages_2mb" in d:
> > + hugepage_config_dict = d["hugepages_2mb"]
> > if "force_first_numa" not in hugepage_config_dict:
> > hugepage_config_dict["force_first_numa"] = False
> > hugepage_config = HugepageConfiguration(**hugepage_config_dict)
> > diff --git a/dts/framework/config/conf_yaml_schema.json
> > b/dts/framework/config/conf_yaml_schema.json
> > index 4731f4511d..f4d7199523 100644
> > --- a/dts/framework/config/conf_yaml_schema.json
> > +++ b/dts/framework/config/conf_yaml_schema.json
> > @@ -146,7 +146,7 @@
> > "compiler"
> > ]
> > },
> > - "hugepages": {
> > + "hugepages_2mb": {
> > "type": "object",
> > "description": "Optional hugepage configuration. If not specified,
> > hugepages won't be configured and DTS will use system configuration.",
> > "properties": {
> > @@ -253,8 +253,8 @@
> > "type": "integer",
> > "description": "How many memory channels to use. Optional,
> > defaults to 1."
> > },
> > - "hugepages": {
> > - "$ref": "#/definitions/hugepages"
> > + "hugepages_2mb": {
> > + "$ref": "#/definitions/hugepages_2mb"
> > },
> > "ports": {
> > "type": "array",
> > diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
> > index 1927910d88..016e0c3dbd 100644
> > --- a/dts/framework/config/types.py
> > +++ b/dts/framework/config/types.py
> > @@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
> > """Allowed keys and values."""
> >
> > #:
> > - hugepages: HugepageConfigurationDict
> > + hugepages_2mb: HugepageConfigurationDict
> > #:
> > name: str
> > #:
> > diff --git a/dts/framework/testbed_model/linux_session.py
> > b/dts/framework/testbed_model/linux_session.py
> > index 5d24030c3d..37f5eacb21 100644
> > --- a/dts/framework/testbed_model/linux_session.py
> > +++ b/dts/framework/testbed_model/linux_session.py
> > @@ -15,7 +15,7 @@
> >
> > from typing_extensions import NotRequired
> >
> > -from framework.exception import RemoteCommandExecutionError
> > +from framework.exception import ConfigurationError,
> > RemoteCommandExecutionError
> > from framework.utils import expand_range
> >
> > from .cpu import LogicalCore
> > @@ -87,25 +87,22 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> > def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) ->
> > None:
>
> You should either rename this to setup_hugepages_2mb() or preferably add a hugepage_size parameter.
>
> > """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
> > self._logger.info("Getting Hugepage information.")
> > - hugepage_size = self._get_hugepage_size()
> > + if "hugepages-2048kB" not in self.send_command("ls
> > /sys/kernel/mm/hugepages").stdout:
> > + raise ConfigurationError("2MB hugepages not supported by
> > operating system")
> > hugepages_total = self._get_hugepages_total()
> > self._numa_nodes = self._get_numa_nodes()
> >
> > - if force_first_numa or hugepages_total != hugepage_count:
> > + if force_first_numa or hugepages_total < hugepage_count:
> > # when forcing numa, we need to clear existing hugepages
> > regardless
> > # of size, so they can be moved to the first numa node
> > - self._configure_huge_pages(hugepage_count, hugepage_size,
> > force_first_numa)
> > + self._configure_huge_pages(hugepage_count, 2048,
> > force_first_numa)
> > else:
> > self._logger.info("Hugepages already configured.")
> > self._mount_huge_pages()
> >
> > - def _get_hugepage_size(self) -> int:
> > - hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}'
> > /proc/meminfo").stdout
> > - return int(hugepage_size)
> > -
>
> Removing _get_hugepage_size() is OK; alternatively rename it to _get_hugepage_default_size().
Agree with Morten's ideas and am okay with keeping the function if
Nick likes, but want to note to Nick that _get_hugepage_size will have
a different process even if we move away from assuming 2MB in all
cases (as Morten suggested we may have to do in the far future). In
that case, we are going to make a decision based on arch, not on the
current info from /proc/meminfo. One of the important ideas behind
this change is that the user cannot configure, outside of DTS, the
hugepage size that DTS will use.
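Patrick's point above, deciding the size from the architecture rather than from /proc/meminfo, could eventually look like this purely hypothetical sketch (DTS does not currently implement such a mapping; the values are illustrative examples only):

```python
# Hypothetical arch -> default hugepage size (kB) mapping, illustrating
# the idea that a future size decision would be based on architecture
# instead of reading /proc/meminfo. Entries and the fallback are
# example assumptions, not actual DTS behavior.
_DEFAULT_HUGEPAGE_SIZE_KB = {
    "x86_64": 2048,
    "arm64": 2048,  # assumes a 4kB base page size; other granules differ
}

def default_hugepage_size_kb(arch: str) -> int:
    # Fall back to 2MB, the only size DTS configures at runtime.
    return _DEFAULT_HUGEPAGE_SIZE_KB.get(arch, 2048)
```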
* Re: [PATCH v2] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-09 17:28 ` [PATCH v2] " Nicholas Pratte
2024-04-10 7:23 ` Morten Brørup
@ 2024-04-10 10:43 ` Juraj Linkeš
1 sibling, 0 replies; 20+ messages in thread
From: Juraj Linkeš @ 2024-04-10 10:43 UTC (permalink / raw)
To: Nicholas Pratte
Cc: paul.szczepanek, bruce.richardson, yoan.picchi, jspewock, thomas,
mb, wathsala.vithanage, probb, Honnappa.Nagarahalli, dev
> diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
> index 5d24030c3d..37f5eacb21 100644
> --- a/dts/framework/testbed_model/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -15,7 +15,7 @@
>
> from typing_extensions import NotRequired
>
> -from framework.exception import RemoteCommandExecutionError
> +from framework.exception import ConfigurationError, RemoteCommandExecutionError
> from framework.utils import expand_range
>
> from .cpu import LogicalCore
> @@ -87,25 +87,22 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
> self._logger.info("Getting Hugepage information.")
> - hugepage_size = self._get_hugepage_size()
> + if "hugepages-2048kB" not in self.send_command("ls /sys/kernel/mm/hugepages").stdout:
I have one extra point on top of Morten's suggestions (which I like).
Let's create a class variable where we store the hugepage size (2048)
and use that across the code.
> + raise ConfigurationError("2MB hugepages not supported by operating system")
> hugepages_total = self._get_hugepages_total()
> self._numa_nodes = self._get_numa_nodes()
>
> - if force_first_numa or hugepages_total != hugepage_count:
> + if force_first_numa or hugepages_total < hugepage_count:
> # when forcing numa, we need to clear existing hugepages regardless
> # of size, so they can be moved to the first numa node
> - self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
> + self._configure_huge_pages(hugepage_count, 2048, force_first_numa)
> else:
> self._logger.info("Hugepages already configured.")
> self._mount_huge_pages()
>
> - def _get_hugepage_size(self) -> int:
> - hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
> - return int(hugepage_size)
> -
> def _get_hugepages_total(self) -> int:
> hugepages_total = self.send_command(
> - "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
> + "cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
> ).stdout
> return int(hugepages_total)
>
> --
> 2.44.0
>
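Juraj's suggestion above, a single class variable holding the 2048 kB size, might look like this rough sketch (names are illustrative, not the actual DTS implementation):

```python
# Illustrative sketch of storing the hugepage size once as a class
# variable and deriving every sysfs path from it, per the review
# comment above, so the literal 2048 appears in exactly one place.
class LinuxSessionSketch:
    _hugepage_size_kb = 2048  # single source of truth for the 2MB size

    def _hugepage_dir(self) -> str:
        return f"/sys/kernel/mm/hugepages/hugepages-{self._hugepage_size_kb}kB"

    def _nr_hugepages_command(self) -> str:
        return f"cat {self._hugepage_dir()}/nr_hugepages"
```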
* [PATCH v3] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-04 15:31 [PATCH] dts: Change hugepage runtime config to 2MB Exclusively Nicholas Pratte
` (2 preceding siblings ...)
2024-04-09 17:28 ` [PATCH v2] " Nicholas Pratte
@ 2024-04-16 18:18 ` Nicholas Pratte
2024-04-16 18:25 ` Morten Brørup
` (2 more replies)
2024-04-18 16:10 ` [PATCH v4] " Nicholas Pratte
4 siblings, 3 replies; 20+ messages in thread
From: Nicholas Pratte @ 2024-04-16 18:18 UTC (permalink / raw)
To: paul.szczepanek, wathsala.vithanage, bruce.richardson,
juraj.linkes, yoan.picchi, mb, jspewock, Honnappa.Nagarahalli,
thomas, probb
Cc: dev, Nicholas Pratte
The previous implementation configures and allocates hugepage sizes
based on a system default. This can lead to two problems: overallocation of
hugepages (which may crash the remote host), and configuration of hugepage
sizes that are not recommended during runtime. This new implementation
allows only 2MB hugepage allocation during runtime; any other unique
hugepage size must be configured by the end-user for initializing DTS.
If the amount of 2MB hugepages requested exceeds the amount of 2MB
hugepages already configured on the system, then the system will remount
hugepages to cover the difference. If the amount of hugepages requested is
either less than or equal to the amount already configured on the system,
then nothing is done.
Bugzilla ID: 1370
Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
---
doc/guides/tools/dts.rst | 7 +++++-
dts/conf.yaml | 4 ++--
dts/framework/config/__init__.py | 4 ++--
dts/framework/config/conf_yaml_schema.json | 6 ++---
dts/framework/config/types.py | 2 +-
dts/framework/testbed_model/linux_session.py | 24 +++++++++++---------
dts/framework/testbed_model/node.py | 8 ++++++-
dts/framework/testbed_model/os_session.py | 4 +++-
8 files changed, 37 insertions(+), 22 deletions(-)
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 47b218b2c6..1103db0faa 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -131,7 +131,12 @@ There are two areas that need to be set up on a System Under Test:
You may specify the optional hugepage configuration in the DTS config file.
If you do, DTS will take care of configuring hugepages,
- overwriting your current SUT hugepage configuration.
+ overwriting your current SUT hugepage configuration. Configuration of hugepages via DTS
+ allows only for configuration of 2MB hugepages. Thus, if your needs require hugepage
+ sizes other than 2MB, these configurations must be done outside of the DTS
+ framework; moreover, if you do not desire the use of 2MB hugepages, and instead prefer
+ other sizes (e.g. 1GB), then we assume that hugepages have been manually configured before
+ runtime.
* System under test configuration
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 8068345dd5..56c3ae6f4c 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -35,7 +35,7 @@ nodes:
lcores: "" # use all the available logical cores
use_first_core: false # tells DPDK to use any physical core
memory_channels: 4 # tells DPDK to use 4 memory channels
- hugepages: # optional; if removed, will use system hugepage configuration
+ hugepages_2mb: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
ports:
@@ -71,7 +71,7 @@ nodes:
os_driver: rdma
peer_node: "SUT 1"
peer_pci: "0000:00:08.1"
- hugepages: # optional; if removed, will use system hugepage configuration
+ hugepages_2mb: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
traffic_generator:
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 4cb5c74059..b6f820e39e 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -255,8 +255,8 @@ def from_dict(
Either an SUT or TG configuration instance.
"""
hugepage_config = None
- if "hugepages" in d:
- hugepage_config_dict = d["hugepages"]
+ if "hugepages_2mb" in d:
+ hugepage_config_dict = d["hugepages_2mb"]
if "force_first_numa" not in hugepage_config_dict:
hugepage_config_dict["force_first_numa"] = False
hugepage_config = HugepageConfiguration(**hugepage_config_dict)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 4731f4511d..f4d7199523 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -146,7 +146,7 @@
"compiler"
]
},
- "hugepages": {
+ "hugepages_2mb": {
"type": "object",
"description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
"properties": {
@@ -253,8 +253,8 @@
"type": "integer",
"description": "How many memory channels to use. Optional, defaults to 1."
},
- "hugepages": {
- "$ref": "#/definitions/hugepages"
+ "hugepages_2mb": {
+ "$ref": "#/definitions/hugepages_2mb"
},
"ports": {
"type": "array",
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
index 1927910d88..016e0c3dbd 100644
--- a/dts/framework/config/types.py
+++ b/dts/framework/config/types.py
@@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
"""Allowed keys and values."""
#:
- hugepages: HugepageConfigurationDict
+ hugepages_2mb: HugepageConfigurationDict
#:
name: str
#:
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 5d24030c3d..d0f7cfa77c 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -15,7 +15,7 @@
from typing_extensions import NotRequired
-from framework.exception import RemoteCommandExecutionError
+from framework.exception import ConfigurationError, RemoteCommandExecutionError
from framework.utils import expand_range
from .cpu import LogicalCore
@@ -84,14 +84,20 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
"""Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
return dpdk_prefix
- def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+ def setup_hugepages(
+ self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
+ ) -> None:
"""Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
self._logger.info("Getting Hugepage information.")
- hugepage_size = self._get_hugepage_size()
- hugepages_total = self._get_hugepages_total()
+ hugepages_total = self._get_hugepages_total(hugepage_size)
+ if (
+ f"hugepages-{hugepage_size}kB"
+ not in self.send_command("ls /sys/kernel/mm/hugepages").stdout
+ ):
+ raise ConfigurationError("hugepage size not supported by operating system")
self._numa_nodes = self._get_numa_nodes()
- if force_first_numa or hugepages_total != hugepage_count:
+ if force_first_numa or hugepages_total < hugepage_count:
# when forcing numa, we need to clear existing hugepages regardless
# of size, so they can be moved to the first numa node
self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
@@ -99,13 +105,9 @@ def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
self._logger.info("Hugepages already configured.")
self._mount_huge_pages()
- def _get_hugepage_size(self) -> int:
- hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
- return int(hugepage_size)
-
- def _get_hugepages_total(self) -> int:
+ def _get_hugepages_total(self, hugepage_size: int) -> int:
hugepages_total = self.send_command(
- "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ f"cat /sys/kernel/mm/hugepages/hugepages-{hugepage_size}kB/nr_hugepages"
).stdout
return int(hugepages_total)
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 74061f6262..056a031ca0 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -97,6 +97,10 @@ def __init__(self, node_config: NodeConfiguration):
self.virtual_devices = []
self._init_ports()
+ @property
+ def _hugepage_default_size(self) -> int:
+ return 2048
+
def _init_ports(self) -> None:
self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
self.main_session.update_ports(self.ports)
@@ -266,7 +270,9 @@ def _setup_hugepages(self) -> None:
"""
if self.config.hugepages:
self.main_session.setup_hugepages(
- self.config.hugepages.amount, self.config.hugepages.force_first_numa
+ self.config.hugepages.amount,
+ self._hugepage_default_size,
+ self.config.hugepages.force_first_numa,
)
def configure_port_state(self, port: Port, enable: bool = True) -> None:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index d5bf7e0401..69f9483f05 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -345,7 +345,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
"""
@abstractmethod
- def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+ def setup_hugepages(
+ self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
+ ) -> None:
"""Configure hugepages on the node.
Get the node's Hugepage Size, configure the specified count of hugepages
--
2.44.0
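The support check v3 adds can be exercised standalone, as in this sketch (it assumes the usual Linux sysfs layout; the directory listing is passed in as a string rather than read from a remote host, so the helper name and shape are illustrative):

```python
# Sketch of v3's hugepage-size support check: a size is supported when a
# matching hugepages-<size>kB directory exists under
# /sys/kernel/mm/hugepages; otherwise a ConfigurationError is raised.
class ConfigurationError(Exception):
    pass

def check_hugepage_size_supported(sysfs_listing: str, hugepage_size_kb: int) -> None:
    if f"hugepages-{hugepage_size_kb}kB" not in sysfs_listing:
        raise ConfigurationError("hugepage size not supported by operating system")
```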
* RE: [PATCH v3] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-16 18:18 ` [PATCH v3] " Nicholas Pratte
@ 2024-04-16 18:25 ` Morten Brørup
2024-04-16 18:26 ` Nicholas Pratte
2024-04-17 13:46 ` Juraj Linkeš
2 siblings, 0 replies; 20+ messages in thread
From: Morten Brørup @ 2024-04-16 18:25 UTC (permalink / raw)
To: Nicholas Pratte, paul.szczepanek, wathsala.vithanage,
bruce.richardson, juraj.linkes, yoan.picchi, jspewock,
Honnappa.Nagarahalli, thomas, probb
Cc: dev
> From: Nicholas Pratte [mailto:npratte@iol.unh.edu]
> Sent: Tuesday, 16 April 2024 20.19
>
> The previous implementation configures and allocates hugepage sizes
> based on a system default. This can lead to two problems: overallocation
> of
> hugepages (which may crash the remote host), and configuration of
> hugepage
> sizes that are not recommended during runtime. This new implementation
> allows only 2MB hugepage allocation during runtime; any other unique
> hugepage size must be configured by the end-user for initializing DTS.
>
> If the amount of 2MB hugepages requested exceeds the amount of 2MB
> hugepages already configured on the system, then the system will remount
> hugepages to cover the difference. If the amount of hugepages requested
> is
> either less than or equal to the amount already configured on the
> system,
> then nothing is done.
>
> Bugzilla ID: 1370
> Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
This version LGTM.
Acked-by: Morten Brørup <mb@smartsharesystems.com>
* Re: [PATCH v3] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-16 18:18 ` [PATCH v3] " Nicholas Pratte
2024-04-16 18:25 ` Morten Brørup
@ 2024-04-16 18:26 ` Nicholas Pratte
2024-04-17 13:46 ` Juraj Linkeš
2 siblings, 0 replies; 20+ messages in thread
From: Nicholas Pratte @ 2024-04-16 18:26 UTC (permalink / raw)
To: paul.szczepanek, wathsala.vithanage, bruce.richardson,
juraj.linkes, yoan.picchi, mb, jspewock, Honnappa.Nagarahalli,
thomas, probb
Cc: dev
I like the changes suggested by Morten, and made such modifications
accordingly. Given that the hugepage size will likely be determined by
arch going forward, and looking ahead to patch 1360 and ongoing config
changes, I decided to create a property method in the Node object that
can be modified to our liking in the future (credit to Jeremy for this
idea). This solution is designed in such a way that, when arch is
discovered and removed from the config, hugepage sizes can be
determined right after discovering the arch (this is just to save me
time in the future when working on patch 1360). I also opted to put
the size variable back into the methods under linux_session.py (per
Morten's suggestions), but given the implementation Jeremy and I
worked on, there is no need for the _get_hugepage_size() method in
linux_session.py, as we likely won't need it going forward, so it has
been removed.
On Tue, Apr 16, 2024 at 2:18 PM Nicholas Pratte <npratte@iol.unh.edu> wrote:
>
> The previous implementation configures and allocates hugepage sizes
> based on a system default. This can lead to two problems: overallocation of
> hugepages (which may crash the remote host), and configuration of hugepage
> sizes that are not recommended during runtime. This new implementation
> allows only 2MB hugepage allocation during runtime; any other unique
> hugepage size must be configured by the end-user for initializing DTS.
>
> If the amount of 2MB hugepages requested exceeds the amount of 2MB
> hugepages already configured on the system, then the system will remount
> hugepages to cover the difference. If the amount of hugepages requested is
> either less than or equal to the amount already configured on the system,
> then nothing is done.
>
> Bugzilla ID: 1370
> Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
> doc/guides/tools/dts.rst | 7 +++++-
> dts/conf.yaml | 4 ++--
> dts/framework/config/__init__.py | 4 ++--
> dts/framework/config/conf_yaml_schema.json | 6 ++---
> dts/framework/config/types.py | 2 +-
> dts/framework/testbed_model/linux_session.py | 24 +++++++++++---------
> dts/framework/testbed_model/node.py | 8 ++++++-
> dts/framework/testbed_model/os_session.py | 4 +++-
> 8 files changed, 37 insertions(+), 22 deletions(-)
>
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 47b218b2c6..1103db0faa 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -131,7 +131,12 @@ There are two areas that need to be set up on a System Under Test:
>
> You may specify the optional hugepage configuration in the DTS config file.
> If you do, DTS will take care of configuring hugepages,
> - overwriting your current SUT hugepage configuration.
> + overwriting your current SUT hugepage configuration. Configuration of hugepages via DTS
> + allows only for configuration of 2MB hugepages. Thus, if your needs require hugepage
> + sizes not equal to 2MB, then you these configurations must be done outside of the DTS
> + framework; moreover, if you do not desire the use of 2MB hugepages, and instead perfer
> + other sizes (e.g 1GB), then we assume that hugepages have been manually configued before
> + runtime.
>
> * System under test configuration
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 8068345dd5..56c3ae6f4c 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -35,7 +35,7 @@ nodes:
> lcores: "" # use all the available logical cores
> use_first_core: false # tells DPDK to use any physical core
> memory_channels: 4 # tells DPDK to use 4 memory channels
> - hugepages: # optional; if removed, will use system hugepage configuration
> + hugepages_2mb: # optional; if removed, will use system hugepage configuration
> amount: 256
> force_first_numa: false
> ports:
> @@ -71,7 +71,7 @@ nodes:
> os_driver: rdma
> peer_node: "SUT 1"
> peer_pci: "0000:00:08.1"
> - hugepages: # optional; if removed, will use system hugepage configuration
> + hugepages_2mb: # optional; if removed, will use system hugepage configuration
> amount: 256
> force_first_numa: false
> traffic_generator:
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index 4cb5c74059..b6f820e39e 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -255,8 +255,8 @@ def from_dict(
> Either an SUT or TG configuration instance.
> """
> hugepage_config = None
> - if "hugepages" in d:
> - hugepage_config_dict = d["hugepages"]
> + if "hugepages_2mb" in d:
> + hugepage_config_dict = d["hugepages_2mb"]
> if "force_first_numa" not in hugepage_config_dict:
> hugepage_config_dict["force_first_numa"] = False
> hugepage_config = HugepageConfiguration(**hugepage_config_dict)
> diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
> index 4731f4511d..f4d7199523 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -146,7 +146,7 @@
> "compiler"
> ]
> },
> - "hugepages": {
> + "hugepages_2mb": {
> "type": "object",
> "description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
> "properties": {
> @@ -253,8 +253,8 @@
> "type": "integer",
> "description": "How many memory channels to use. Optional, defaults to 1."
> },
> - "hugepages": {
> - "$ref": "#/definitions/hugepages"
> + "hugepages_2mb": {
> + "$ref": "#/definitions/hugepages_2mb"
> },
> "ports": {
> "type": "array",
> diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
> index 1927910d88..016e0c3dbd 100644
> --- a/dts/framework/config/types.py
> +++ b/dts/framework/config/types.py
> @@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
> """Allowed keys and values."""
>
> #:
> - hugepages: HugepageConfigurationDict
> + hugepages_2mb: HugepageConfigurationDict
> #:
> name: str
> #:
> diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
> index 5d24030c3d..d0f7cfa77c 100644
> --- a/dts/framework/testbed_model/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -15,7 +15,7 @@
>
> from typing_extensions import NotRequired
>
> -from framework.exception import RemoteCommandExecutionError
> +from framework.exception import ConfigurationError, RemoteCommandExecutionError
> from framework.utils import expand_range
>
> from .cpu import LogicalCore
> @@ -84,14 +84,20 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
> return dpdk_prefix
>
> - def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> + def setup_hugepages(
> + self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
> + ) -> None:
> """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
> self._logger.info("Getting Hugepage information.")
> - hugepage_size = self._get_hugepage_size()
> - hugepages_total = self._get_hugepages_total()
> + hugepages_total = self._get_hugepages_total(hugepage_size)
> + if (
> + f"hugepages-{hugepage_size}kB"
> + not in self.send_command("ls /sys/kernel/mm/hugepages").stdout
> + ):
> + raise ConfigurationError("hugepage size not supported by operating system")
> self._numa_nodes = self._get_numa_nodes()
>
> - if force_first_numa or hugepages_total != hugepage_count:
> + if force_first_numa or hugepages_total < hugepage_count:
> # when forcing numa, we need to clear existing hugepages regardless
> # of size, so they can be moved to the first numa node
> self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
> @@ -99,13 +105,9 @@ def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> self._logger.info("Hugepages already configured.")
> self._mount_huge_pages()
>
> - def _get_hugepage_size(self) -> int:
> - hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
> - return int(hugepage_size)
> -
> - def _get_hugepages_total(self) -> int:
> + def _get_hugepages_total(self, hugepage_size: int) -> int:
> hugepages_total = self.send_command(
> - "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
> + f"cat /sys/kernel/mm/hugepages/hugepages-{hugepage_size}kB/nr_hugepages"
> ).stdout
> return int(hugepages_total)
>
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index 74061f6262..056a031ca0 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -97,6 +97,10 @@ def __init__(self, node_config: NodeConfiguration):
> self.virtual_devices = []
> self._init_ports()
>
> + @property
> + def _hugepage_default_size(self) -> int:
> + return 2048
> +
> def _init_ports(self) -> None:
> self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
> self.main_session.update_ports(self.ports)
> @@ -266,7 +270,9 @@ def _setup_hugepages(self) -> None:
> """
> if self.config.hugepages:
> self.main_session.setup_hugepages(
> - self.config.hugepages.amount, self.config.hugepages.force_first_numa
> + self.config.hugepages.amount,
> + self._hugepage_default_size,
> + self.config.hugepages.force_first_numa,
> )
>
> def configure_port_state(self, port: Port, enable: bool = True) -> None:
> diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
> index d5bf7e0401..69f9483f05 100644
> --- a/dts/framework/testbed_model/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -345,7 +345,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> """
>
> @abstractmethod
> - def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> + def setup_hugepages(
> + self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
> + ) -> None:
> """Configure hugepages on the node.
>
> Get the node's Hugepage Size, configure the specified count of hugepages
> --
> 2.44.0
>
^ permalink raw reply [flat|nested] 20+ messages in thread
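The operating-system support check introduced in setup_hugepages() above can be sketched as a standalone helper. The function name and the injectable sysfs root are assumptions for illustration; the logic mirrors the patch's use of the Linux sysfs layout, where the kernel exposes one hugepages-&lt;size&gt;kB directory per supported page size.

```python
import os


def hugepage_size_supported(size_kb: int,
                            sysfs_root: str = "/sys/kernel/mm/hugepages") -> bool:
    # e.g. hugepages-2048kB exists when 2MB pages are supported
    try:
        entries = os.listdir(sysfs_root)
    except FileNotFoundError:
        # no hugepage support exposed at all
        return False
    return f"hugepages-{size_kb}kB" in entries
```

The patch performs the equivalent check remotely via `ls /sys/kernel/mm/hugepages` and raises ConfigurationError when the directory is absent.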
* Re: [PATCH v3] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-16 18:18 ` [PATCH v3] " Nicholas Pratte
2024-04-16 18:25 ` Morten Brørup
2024-04-16 18:26 ` Nicholas Pratte
@ 2024-04-17 13:46 ` Juraj Linkeš
2 siblings, 0 replies; 20+ messages in thread
From: Juraj Linkeš @ 2024-04-17 13:46 UTC (permalink / raw)
To: Nicholas Pratte
Cc: paul.szczepanek, wathsala.vithanage, bruce.richardson,
yoan.picchi, mb, jspewock, Honnappa.Nagarahalli, thomas, probb,
dev
Don't forget to run the ../devtools/dts-check-format.sh script, it
found one issue:
framework/testbed_model/os_session.py:351: [D] D417 Missing argument
descriptions in the docstring [pydocstyle]
On Tue, Apr 16, 2024 at 8:18 PM Nicholas Pratte <npratte@iol.unh.edu> wrote:
>
> The previous implementation configures and allocates hugepage sizes
> based on a system default. This can lead to two problems: overallocation of
> hugepages (which may crash the remote host), and configuration of hugepage
> sizes that are not recommended during runtime. This new implementation
> allows only 2MB hugepage allocation during runtime; any other unique
> hugepage size must be configured by the end-user for initializing DTS.
>
> If the amount of 2MB hugepages requested exceeds the amount of 2MB
> hugepages already configured on the system, then the system will remount
> hugepages to cover the difference. If the amount of hugepages requested is
> either less than or equal to the amount already configured on the system,
> then nothing is done.
>
> Bugzilla ID: 1370
> Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
> Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
> doc/guides/tools/dts.rst | 7 +++++-
> dts/conf.yaml | 4 ++--
> dts/framework/config/__init__.py | 4 ++--
> dts/framework/config/conf_yaml_schema.json | 6 ++---
> dts/framework/config/types.py | 2 +-
> dts/framework/testbed_model/linux_session.py | 24 +++++++++++---------
> dts/framework/testbed_model/node.py | 8 ++++++-
> dts/framework/testbed_model/os_session.py | 4 +++-
> 8 files changed, 37 insertions(+), 22 deletions(-)
>
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 47b218b2c6..1103db0faa 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -131,7 +131,12 @@ There are two areas that need to be set up on a System Under Test:
>
> You may specify the optional hugepage configuration in the DTS config file.
> If you do, DTS will take care of configuring hugepages,
> - overwriting your current SUT hugepage configuration.
> + overwriting your current SUT hugepage configuration. Configuration of hugepages via DTS
> + allows only for configuration of 2MB hugepages.
I'd like to see a quick explanation of why we only do 2MB hugepages,
like you mentioned in the commit message.
> Thus, if your needs require hugepage
> + sizes not equal to 2MB, then you these configurations must be done outside of the DTS
typo - you is extra. And I think this should be configuration singular
- this configuration must be done.
> + framework; moreover, if you do not desire the use of 2MB hugepages, and instead perfer
prefer
> + other sizes (e.g 1GB), then we assume that hugepages have been manually configued before
configured
> + runtime.
Looks like the two sentences (separated by semicolon) are saying the same thing.
>
> * System under test configuration
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v4] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-04 15:31 [PATCH] dts: Change hugepage runtime config to 2MB Exclusively Nicholas Pratte
` (3 preceding siblings ...)
2024-04-16 18:18 ` [PATCH v3] " Nicholas Pratte
@ 2024-04-18 16:10 ` Nicholas Pratte
2024-04-25 8:00 ` Juraj Linkeš
4 siblings, 1 reply; 20+ messages in thread
From: Nicholas Pratte @ 2024-04-18 16:10 UTC (permalink / raw)
To: jspewock, wathsala.vithanage, Honnappa.Nagarahalli, probb,
thomas, mb, bruce.richardson, juraj.linkes, yoan.picchi,
paul.szczepanek
Cc: dev, Nicholas Pratte
The previous implementation configures and allocates hugepage sizes
based on a system default. This can lead to two problems: overallocation of
hugepages (which may crash the remote host), and configuration of hugepage
sizes that are not recommended during runtime. This new implementation
allows only 2MB hugepage allocation during runtime; any other unique
hugepage size must be configured by the end-user for initializing DTS.
If the amount of 2MB hugepages requested exceeds the amount of 2MB
hugepages already configured on the system, then the system will remount
hugepages to cover the difference. If the amount of hugepages requested is
either less than or equal to the amount already configured on the system,
then nothing is done.
Bugzilla ID: 1370
Signed-off-by: Nicholas Pratte <npratte@iol.unh.edu>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
---
v4:
* dts.rst punctuation/grammar corrections and 2mb exclusivity
justifications included in documentation
---
doc/guides/tools/dts.rst | 6 ++++-
dts/conf.yaml | 4 ++--
dts/framework/config/__init__.py | 4 ++--
dts/framework/config/conf_yaml_schema.json | 6 ++---
dts/framework/config/types.py | 2 +-
dts/framework/testbed_model/linux_session.py | 24 +++++++++++---------
dts/framework/testbed_model/node.py | 8 ++++++-
dts/framework/testbed_model/os_session.py | 5 +++-
8 files changed, 37 insertions(+), 22 deletions(-)
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 47b218b2c6..71473dbb3d 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -131,7 +131,11 @@ There are two areas that need to be set up on a System Under Test:
You may specify the optional hugepage configuration in the DTS config file.
If you do, DTS will take care of configuring hugepages,
- overwriting your current SUT hugepage configuration.
+ overwriting your current SUT hugepage configuration. Configuration of hugepages via DTS
+ allows only for allocation of 2MB hugepages, as doing so prevents accidental/over
+ allocation of hugepages and hugepages sizes not recommended during runtime due to
+ contiguous memory space requirements. Thus, if you require hugepage
+ sizes not equal to 2MB, then this configuration must be done outside of the DTS framework.
* System under test configuration
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 8068345dd5..56c3ae6f4c 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -35,7 +35,7 @@ nodes:
lcores: "" # use all the available logical cores
use_first_core: false # tells DPDK to use any physical core
memory_channels: 4 # tells DPDK to use 4 memory channels
- hugepages: # optional; if removed, will use system hugepage configuration
+ hugepages_2mb: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
ports:
@@ -71,7 +71,7 @@ nodes:
os_driver: rdma
peer_node: "SUT 1"
peer_pci: "0000:00:08.1"
- hugepages: # optional; if removed, will use system hugepage configuration
+ hugepages_2mb: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
traffic_generator:
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 4cb5c74059..b6f820e39e 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -255,8 +255,8 @@ def from_dict(
Either an SUT or TG configuration instance.
"""
hugepage_config = None
- if "hugepages" in d:
- hugepage_config_dict = d["hugepages"]
+ if "hugepages_2mb" in d:
+ hugepage_config_dict = d["hugepages_2mb"]
if "force_first_numa" not in hugepage_config_dict:
hugepage_config_dict["force_first_numa"] = False
hugepage_config = HugepageConfiguration(**hugepage_config_dict)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 4731f4511d..f4d7199523 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -146,7 +146,7 @@
"compiler"
]
},
- "hugepages": {
+ "hugepages_2mb": {
"type": "object",
"description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
"properties": {
@@ -253,8 +253,8 @@
"type": "integer",
"description": "How many memory channels to use. Optional, defaults to 1."
},
- "hugepages": {
- "$ref": "#/definitions/hugepages"
+ "hugepages_2mb": {
+ "$ref": "#/definitions/hugepages_2mb"
},
"ports": {
"type": "array",
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
index 1927910d88..016e0c3dbd 100644
--- a/dts/framework/config/types.py
+++ b/dts/framework/config/types.py
@@ -46,7 +46,7 @@ class NodeConfigDict(TypedDict):
"""Allowed keys and values."""
#:
- hugepages: HugepageConfigurationDict
+ hugepages_2mb: HugepageConfigurationDict
#:
name: str
#:
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 5d24030c3d..d0f7cfa77c 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -15,7 +15,7 @@
from typing_extensions import NotRequired
-from framework.exception import RemoteCommandExecutionError
+from framework.exception import ConfigurationError, RemoteCommandExecutionError
from framework.utils import expand_range
from .cpu import LogicalCore
@@ -84,14 +84,20 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
"""Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
return dpdk_prefix
- def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+ def setup_hugepages(
+ self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
+ ) -> None:
"""Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
self._logger.info("Getting Hugepage information.")
- hugepage_size = self._get_hugepage_size()
- hugepages_total = self._get_hugepages_total()
+ hugepages_total = self._get_hugepages_total(hugepage_size)
+ if (
+ f"hugepages-{hugepage_size}kB"
+ not in self.send_command("ls /sys/kernel/mm/hugepages").stdout
+ ):
+ raise ConfigurationError("hugepage size not supported by operating system")
self._numa_nodes = self._get_numa_nodes()
- if force_first_numa or hugepages_total != hugepage_count:
+ if force_first_numa or hugepages_total < hugepage_count:
# when forcing numa, we need to clear existing hugepages regardless
# of size, so they can be moved to the first numa node
self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
@@ -99,13 +105,9 @@ def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
self._logger.info("Hugepages already configured.")
self._mount_huge_pages()
- def _get_hugepage_size(self) -> int:
- hugepage_size = self.send_command("awk '/Hugepagesize/ {print $2}' /proc/meminfo").stdout
- return int(hugepage_size)
-
- def _get_hugepages_total(self) -> int:
+ def _get_hugepages_total(self, hugepage_size: int) -> int:
hugepages_total = self.send_command(
- "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ f"cat /sys/kernel/mm/hugepages/hugepages-{hugepage_size}kB/nr_hugepages"
).stdout
return int(hugepages_total)
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 74061f6262..056a031ca0 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -97,6 +97,10 @@ def __init__(self, node_config: NodeConfiguration):
self.virtual_devices = []
self._init_ports()
+ @property
+ def _hugepage_default_size(self) -> int:
+ return 2048
+
def _init_ports(self) -> None:
self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
self.main_session.update_ports(self.ports)
@@ -266,7 +270,9 @@ def _setup_hugepages(self) -> None:
"""
if self.config.hugepages:
self.main_session.setup_hugepages(
- self.config.hugepages.amount, self.config.hugepages.force_first_numa
+ self.config.hugepages.amount,
+ self._hugepage_default_size,
+ self.config.hugepages.force_first_numa,
)
def configure_port_state(self, port: Port, enable: bool = True) -> None:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index d5bf7e0401..5d58400cbe 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -345,7 +345,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
"""
@abstractmethod
- def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+ def setup_hugepages(
+ self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
+ ) -> None:
"""Configure hugepages on the node.
Get the node's Hugepage Size, configure the specified count of hugepages
@@ -353,6 +355,7 @@ def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
Args:
hugepage_count: Configure this many hugepages.
+ hugepage_size: Configure hugepages of this size (currently not used in the config)
force_first_numa: If :data:`True`, configure hugepages just on the first numa node.
"""
--
2.44.0
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-18 16:10 ` [PATCH v4] " Nicholas Pratte
@ 2024-04-25 8:00 ` Juraj Linkeš
2024-04-29 17:26 ` Nicholas Pratte
0 siblings, 1 reply; 20+ messages in thread
From: Juraj Linkeš @ 2024-04-25 8:00 UTC (permalink / raw)
To: Nicholas Pratte
Cc: jspewock, wathsala.vithanage, Honnappa.Nagarahalli, probb,
thomas, mb, bruce.richardson, yoan.picchi, paul.szczepanek, dev
Just a few minor points, otherwise this looks good.
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 47b218b2c6..71473dbb3d 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -131,7 +131,11 @@ There are two areas that need to be set up on a System Under Test:
>
> You may specify the optional hugepage configuration in the DTS config file.
> If you do, DTS will take care of configuring hugepages,
> - overwriting your current SUT hugepage configuration.
> + overwriting your current SUT hugepage configuration. Configuration of hugepages via DTS
> + allows only for allocation of 2MB hugepages, as doing so prevents accidental/over
> + allocation of hugepages and hugepages sizes not recommended during runtime due to
This wording is a bit confusing. What would make sense to me is
"allocation of hugepages with hugepage sizes not recommended". Or am I
missing something?
> + contiguous memory space requirements. Thus, if you require hugepage
> + sizes not equal to 2MB, then this configuration must be done outside of the DTS framework.
>
> * System under test configuration
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 8068345dd5..56c3ae6f4c 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -35,7 +35,7 @@ nodes:
> lcores: "" # use all the available logical cores
> use_first_core: false # tells DPDK to use any physical core
> memory_channels: 4 # tells DPDK to use 4 memory channels
> - hugepages: # optional; if removed, will use system hugepage configuration
> + hugepages_2mb: # optional; if removed, will use system hugepage configuration
> amount: 256
I noticed this mistake I made a while back, but this looks like a good
opportunity to point it out. Amount is used with uncountable nouns and
hugepages are countable. We should correct this mistake (rename to
number in all places, I think that's used somewhere in Linux) and this
patch could be a good place to do it. Or maybe a separate one or even
a separate patchset. What do you think? Would you please fix this
while you're making hugepage changes?
> force_first_numa: false
> ports:
> @@ -71,7 +71,7 @@ nodes:
> os_driver: rdma
> peer_node: "SUT 1"
> peer_pci: "0000:00:08.1"
> - hugepages: # optional; if removed, will use system hugepage configuration
> + hugepages_2mb: # optional; if removed, will use system hugepage configuration
> amount: 256
> force_first_numa: false
> traffic_generator:
<snip>
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index 74061f6262..056a031ca0 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -97,6 +97,10 @@ def __init__(self, node_config: NodeConfiguration):
> self.virtual_devices = []
> self._init_ports()
>
> + @property
> + def _hugepage_default_size(self) -> int:
> + return 2048
> +
I'm thinking about the placement of this and I'm kinda torn between
Node and OSSession. In Node, it makes sense as it basically means "no
matter what sort of node we're connected to, we're going to use this
hugepage size". In OSSession, it would make more sense if we need a
different hugepage size based on the OS (and possibly arch, which
could be passed to it). For now, leaving it in Node is probably
better. We can always move it if need be.
But I don't really get why it's a property. Using properties doesn't
really make sense if we're not putting any logic into it. This could
just as easily be self._hugepage_default_size = 2048 in __init__().
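The two shapes being compared here, side by side (sketch only; the real Node class carries much more state):

```python
class NodeWithProperty:
    # the patch's current form: a property with no logic behind it
    @property
    def _hugepage_default_size(self) -> int:
        return 2048


class NodeWithAttribute:
    # the simpler form suggested above: a plain instance attribute
    def __init__(self) -> None:
        self._hugepage_default_size = 2048
```

Both read identically at the call site in _setup_hugepages(), which is why the property buys nothing until actual logic is needed.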
> def _init_ports(self) -> None:
> self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
> self.main_session.update_ports(self.ports)
> @@ -266,7 +270,9 @@ def _setup_hugepages(self) -> None:
> """
> if self.config.hugepages:
> self.main_session.setup_hugepages(
> - self.config.hugepages.amount, self.config.hugepages.force_first_numa
> + self.config.hugepages.amount,
> + self._hugepage_default_size,
> + self.config.hugepages.force_first_numa,
> )
>
> def configure_port_state(self, port: Port, enable: bool = True) -> None:
> diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
> index d5bf7e0401..5d58400cbe 100644
> --- a/dts/framework/testbed_model/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -345,7 +345,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> """
>
> @abstractmethod
> - def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> + def setup_hugepages(
> + self, hugepage_count: int, hugepage_size: int, force_first_numa: bool
> + ) -> None:
> """Configure hugepages on the node.
>
> Get the node's Hugepage Size, configure the specified count of hugepages
> @@ -353,6 +355,7 @@ def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
>
> Args:
> hugepage_count: Configure this many hugepages.
> + hugepage_size: Configure hugepages of this size (currently not used in the config)
The dot at the end of the sentence is missing. The reminder in the
parentheses is not needed; the OSSession class is decoupled from the
rest of the classes (and is thus not aware of the config).
> force_first_numa: If :data:`True`, configure hugepages just on the first numa node.
> """
>
> --
> 2.44.0
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-25 8:00 ` Juraj Linkeš
@ 2024-04-29 17:26 ` Nicholas Pratte
2024-04-30 8:42 ` Juraj Linkeš
0 siblings, 1 reply; 20+ messages in thread
From: Nicholas Pratte @ 2024-04-29 17:26 UTC (permalink / raw)
To: Juraj Linkeš
Cc: jspewock, wathsala.vithanage, Honnappa.Nagarahalli, probb,
thomas, mb, bruce.richardson, yoan.picchi, paul.szczepanek, dev
I fixed the docstring under setup_hugepages in os_session, and I also
made a quick fix to the dts.rst documentation. For the dts.rst
documentation, I think the following changes make more sense, based on
the concerns outlined:
(here is a snip of the documentation with the change I made)
"as doing so prevents accidental/over
allocation of with hugepage sizes not recommended during runtime due to
contiguous memory space requirements."
With regard to the wording used for the total number of hugepages, I
could change the wording to either "quantity" or "number;" I think
quantity makes more sense and is less ambiguous, but I'm curious what
you think. With reference to your comments about putting this in a
different patch set, I think a good argument could be made to put this
kind of a change in our currently existing patch, but I understand the
argument at both ends. Personally, I am in favor of adding this fix to
the current patch since we're renaming key/value pairs in the schema
and yaml already.
As far as the property is concerned, when Jeremy and I discussed how
to best implement this fix, he suggested that a property might make
more sense here because of the potential changes that we might make to
the default size in the future (whether that be by OS, arch or
otherwise). Ultimately, we settled on inserting a property that
returns 2048 for now with the understanding that, in the future,
developers can add logic to the property as needed. Initially, I had
the hugepage size configured in the manner you described, so the
property implementation is not something I'm adamant on. I can make
the suggested change you gave above, or alternatively, if my provided
reasoning makes sense, I can insert a comment explaining the existence
of the property.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4] dts: Change hugepage runtime config to 2MB Exclusively
2024-04-29 17:26 ` Nicholas Pratte
@ 2024-04-30 8:42 ` Juraj Linkeš
0 siblings, 0 replies; 20+ messages in thread
From: Juraj Linkeš @ 2024-04-30 8:42 UTC (permalink / raw)
To: Nicholas Pratte
Cc: jspewock, wathsala.vithanage, Honnappa.Nagarahalli, probb,
thomas, mb, bruce.richardson, yoan.picchi, paul.szczepanek, dev
Please leave the context you're addressing. Reading this was a bit
confusing and also hard to understand.
On Mon, Apr 29, 2024 at 7:26 PM Nicholas Pratte <npratte@iol.unh.edu> wrote:
>
> I fixed the docstring under setup_hugepages in os_session, and I also
> made a quick fix to the dts.rst documentation. For the dts.rst
> documentation, I think the following changes make more sense, based on
> the concerns outlined:
>
> (here is a snip of the documentation with the change I made)
> "as doing so prevents accidental/over
> allocation of with hugepage sizes not recommended during runtime due to
> contiguous memory space requirements."
>
Looks like there's a typo: allocation of with hugepage sizes
> With regard to the wording used for the total number of hugepages, I
> could change the wording to either "quantity" or "number;" I think
> quantity makes more sense and is less ambiguous, but I'm curious what
> you think. With reference to your comments about putting this in a
> different patch set, I think a good argument could be made to put this
> kind of a change in our currently existing patch, but I understand the
> argument at both ends. Personally, I am in favor of adding this fix to
> the current patch since we're renaming key/value pairs in the schema
> and yaml already.
>
It looks like quantity is used with both countable and uncountable
nouns, whereas number is only used with countable nouns, so quantity
is also fine.
Let's put it into this patch series.
> As far as the property is concerned, when Jeremy and I discussed how
> to best implement this fix, he suggested that a property might make
> more sense here because of the potential changes that we might make to
> the default size in the future (whether that be by OS, arch or
> otherwise). Ultimately, we settled on inserting a property that
> returns 2048 for now with the understanding that, in the future,
> developers can add logic to the property as needed. Initially, I had
> the hugepage size configured in the manner you described, so the
> property implementation is not something I'm adamant on. I can make
> the suggested change you gave above, or alternatively, if my provided
> reasoning makes sense, I can insert a comment explaining the existence
> of the property.
The current expectation, based on the previous discussion, is we won't
be needing any logic, so I'd just make it a class variable (defined in
OSSession, as it's the same for all sessions). We can change it in the
future if we uncover a case where we might need it.
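A minimal sketch of the class-variable form suggested here (names shortened and the attribute name assumed; the real OSSession is an abstract class with many more methods):

```python
from abc import ABC, abstractmethod


class OSSession(ABC):
    # one shared default for all sessions; a subclass (or a future
    # OS/arch-specific session) could still override it
    hugepage_default_size_kb: int = 2048

    @abstractmethod
    def setup_hugepages(self, hugepage_count: int, hugepage_size: int,
                        force_first_numa: bool) -> None:
        ...


class LinuxSession(OSSession):
    def setup_hugepages(self, hugepage_count: int, hugepage_size: int,
                        force_first_numa: bool) -> None:
        pass  # real implementation writes nr_hugepages and remounts hugetlbfs
```

Node would then read `self.main_session.hugepage_default_size_kb` instead of keeping its own property.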
^ permalink raw reply [flat|nested] 20+ messages in thread