DPDK patches and discussions
* DPDK packaging and OpenWrt
@ 2023-05-16 19:18 Philip Prindeville
  2023-05-16 20:43 ` Stephen Hemminger
  2023-05-16 23:06 ` Garrett D'Amore
  0 siblings, 2 replies; 7+ messages in thread
From: Philip Prindeville @ 2023-05-16 19:18 UTC (permalink / raw)
  To: dev

Hi,

I'm a packaging maintainer for some of the OpenWrt ancillary packages, and I'm looking at upstreaming DPDK support into OpenWrt.  Apologies in advance if this is a bit dense (a brain dump) or hops between too many topics.

I was looking at what's been done to date, and it seems there are a few shortcomings which I hope to address.

Amongst them are:

* getting DPDK support into OpenWrt's main repo for the kmod's and into the packages repo for the user-space support;

* making DPDK supported on all available architectures (i.e. agnostic, not just x86_64 specific);

* integrating into the OpenWrt "make menuconfig" system, so that editing packages directly isn't required;

* supporting cross-building and avoiding the [flawed] assumption that the micro-architecture of the build server is in any way related to the processor type of the target machine(s);

* integration into the OpenWrt CI/CD tests;

* making the kernel support as secure/robust as possible (i.e. avoiding an ill-behaved application taking down the kernel, since this is a firewall after all);

* avoiding conflict with other existing module or package functionality;

* avoiding, to the extent possible, introducing one-off toolchain dependencies that unnecessarily complicate the build ecosystem;

To this end, I'm asking the mailing list for guidance on the best packaging practices that satisfy these goals.  I'm willing to do whatever heavy lifting that's within my ability.

I have some related questions.

1. OpenWrt already bakes "vfio.ko" and "vfio-pci.ko" here:

https://github.com/openwrt/openwrt/blob/master/package/kernel/linux/modules/virt.mk#L77-L117

   but does so with "CONFIG_VFIO_PCI_IGD=y", which seems to be specifically for VGA acceleration of guests in a virtualized environment.  That seems to be an odd corner case, and unlikely given that OpenWrt almost always runs on headless hardware.  Should this be reverted?
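
   If so, the change looks small; roughly this in the KernelPackage/vfio-pci
   stanza (a sketch, not the verbatim stanza from virt.mk):

       # sketch: flip the IGD quirk off in the existing vfio-pci stanza
       KCONFIG := \
           CONFIG_VFIO_PCI \
           CONFIG_VFIO_PCI_IGD=n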

2. "vfio.ko" is built with "CONFIG_VFIO_NOIOMMU=y", which seems insecure.  Can this be reverted?

3. Is "uio_pci_generic.ko" worth the potential insecurity/instability of a misbehaving application?  My understanding is that it's only needed on SR-IOV hardware where MSI/MSI-X interrupts aren't supported: is there even any current hardware that doesn't support MSI/MSI-X?  Or am I misunderstanding the use case?

4. Can most functionality be achieved with VFIO + IOMMU support?

5. I've seen packaging for the "iommu_v2.ko" module done here:

https://github.com/k13132/openwrt-dpdk/blob/main/packages/kmod-iommu_v2/Makefile#L22-L42

   Is this potentially useful?  What platforms/architectures use this driver?

6. Hand editing a wrapper for dpdk.tar.gz is a non-starter.  I'd rather add Kconfig adornments to OpenWrt packaging for the wrapper so that options for "-Denable_drivers=" and "-Dcpu_instruction_set=" can be passed in once global build options for OpenWrt have been selected.  Defaulting the instruction set to the build host is going to be wrong at least some of the time, if not most of the time.  For x86_64, what is a decent compromise for a micro-architecture that will build and run on most AMD and Intel hardware?  What's a decent baseline for an ARM64 micro-architecture that will build and run on most ARM hardware?
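
   To make that concrete, what I'm picturing is something like the following
   fragment in the package Makefile (an untested sketch: the CONFIG_DPDK_*
   symbols are hypothetical menuconfig knobs I'd be adding via a Config.in,
   and I'm assuming the meson.mk helper from the packages feed):

       # Sketch of an OpenWrt DPDK package Makefile fragment.  The
       # CONFIG_DPDK_* symbols are hypothetical; the point is to never
       # let meson default to the build host's CPU.
       include ../../devel/meson/meson.mk

       MESON_ARGS += \
           -Dplatform=generic \
           -Dcpu_instruction_set=$(or $(call qstrip,$(CONFIG_DPDK_CPU_ISA)),generic) \
           -Denable_drivers=$(call qstrip,$(CONFIG_DPDK_DRIVERS))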

7. Many embedded systems don't build with glibc because it's too bloated (and because critical fixes sometimes take too long to roll out), and instead use MUSL, eglibc, or uClibc (although the last one seems to be waning).  Only glibc supports <execinfo.h> from what I can tell.  Can broader support for other C runtimes be added?  Can RTE_BACKTRACE be made a parameter or conditionally defined based on the runtime headers?  (autotools and HAVE_EXECINFO_H would be really handy here, but I'm not sure how to make this play well with meson/ninja, and truth be told I'm an old-school Makefile + autotools knuckle-dragger.)
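
   For what it's worth, meson seems to have a direct equivalent of the
   autoconf header check; something like this sketch (assuming DPDK's
   dpdk_conf configuration object is where such flags land):

       # meson sketch: the moral equivalent of AC_CHECK_HEADERS([execinfo.h])
       cc = meson.get_compiler('c')
       if cc.has_header('execinfo.h')
           dpdk_conf.set('RTE_BACKTRACE', 1)
       endif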

8. How do I validate that DPDK is properly being built with the cross-tools and not native tools?  Even when building x86_64 targets on an x86_64 build host, we end up using a custom toolchain and not the "native" compiler that comes with the distro.
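
   The best I've come up with so far is generic binutils spot checks, e.g.
   (paths illustrative):

       # built artifacts should report the target, not the host, machine
       file build/app/dpdk-testpmd
       readelf -h build/lib/librte_eal.so | grep Machine
       # meson logs which compilers the cross file resolved to
       grep -i 'compiler for the host machine' build/meson-logs/meson-log.txt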

9. What is the parity between AMD64 and ARM64?  Do both platforms offer equal functionality and security, if not performance?

10. Who else is using DPDK with OpenWrt that is open to collaboration?

11. What is the user-space TCP/IP stack of choice (or reference) for use with DPDK?

Thanks,

-Philip




* Re: DPDK packaging and OpenWrt
  2023-05-16 19:18 DPDK packaging and OpenWrt Philip Prindeville
@ 2023-05-16 20:43 ` Stephen Hemminger
  2023-05-16 23:24   ` Philip Prindeville
  2023-05-16 23:06 ` Garrett D'Amore
  1 sibling, 1 reply; 7+ messages in thread
From: Stephen Hemminger @ 2023-05-16 20:43 UTC (permalink / raw)
  To: Philip Prindeville; +Cc: dev

On Tue, 16 May 2023 13:18:40 -0600
Philip Prindeville <philipp_subx@redfish-solutions.com> wrote:

> Hi,
> 
> I'm a packaging maintainer for some of the OpenWrt ancillary packages, and I'm looking at upstreaming DPDK support into OpenWrt.  Apologies in advance if this is a bit dense (a brain dump) or hops between too many topics.
> 
> I was looking at what's been done to date, and it seems there are a few shortcomings which I hope to address.
> 
> Amongst them are:
> 
> * getting DPDK support into OpenWrt's main repo for the kmod's and into the packages repo for the user-space support;

DPDK kernel modules are deprecated; creating more usage of them is problematic.

> 
> * making DPDK supported on all available architectures (i.e. agnostic, not just x86_64 specific);
> 
> * integrating into the OpenWrt "make menuconfig" system, so that editing packages directly isn't required;

Does the OpenWrt build system support meson?

> * supporting cross-building and avoiding the [flawed] assumption that the micro-architecture of the build server is in any way related to the processor type of the target machine(s);
> 
> * integration into the OpenWrt CI/CD tests;
> 
> * making the kernel support as secure/robust as possible (i.e. avoiding an ill-behaved application taking down the kernel, since this is a firewall after all);

Not a problem with vfio

> 
> * avoiding conflict with other existing module or package functionality;
> 
> * avoiding, to the extent possible, introducing one-off toolchain dependencies that unnecessarily complicate the build ecosystem;
> 
> To this end, I'm asking the mailing list for guidance on the best packaging practices that satisfy these goals.  I'm willing to do whatever heavy lifting that's within my ability.
> 
> I have some related questions.
> 
> 1. OpenWrt already bakes "vfio.ko" and "vfio-pci.ko" here:
> 
> https://github.com/openwrt/openwrt/blob/master/package/kernel/linux/modules/virt.mk#L77-L117
> 
>    but does so with "CONFIG_VFIO_PCI_IGD=y", which seems to be specifically for VGA acceleration of guests in a virtualized environment.  That seems to be an odd corner case, and unlikely given that OpenWrt almost always runs on headless hardware.  Should this be reverted?

Yes.

> 
> 2. "vfio.ko" is built with "CONFIG_VFIO_NOIOMMU=y", which seems insecure.  Can this be reverted?

No. Most of OpenWrt's systems do not have an IOMMU.

> 3. Is "uio_pci_generic.ko" worth the potential insecurity/instability of a misbehaving application?  My understanding is that it's only needed on SR-IOV hardware where MSI/MSI-X interrupts aren't supported: is there even any current hardware that doesn't support MSI/MSI-X?  Or am I misunderstanding the use case?

Use VFIO noiommu; it is better supported and tested upstream.  With PCI generic, no interrupts work.
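
The usual sequence is roughly (sketch; the PCI address is an example):

    modprobe vfio enable_unsafe_noiommu_mode=1
    modprobe vfio-pci
    dpdk-devbind.py --bind=vfio-pci 0000:01:00.0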


> 4. Can most functionality be achieved with VFIO + IOMMU support?

Yes.

> 5. I've seen packaging for the "iommu_v2.ko" module done here:
> 
> https://github.com/k13132/openwrt-dpdk/blob/main/packages/kmod-iommu_v2/Makefile#L22-L42
> 
>    Is this potentially useful?  What platforms/architectures use this driver?
> 
> 6. Hand editing a wrapper for dpdk.tar.gz is a non-starter.  I'd rather add Kconfig adornments to OpenWrt packaging for the wrapper so that options for "-Denable_drivers=" and "-Dcpu_instruction_set=" can be passed in once global build options for OpenWrt have been selected.  Defaulting the instruction set to the build host is going to be wrong at least some of the time, if not most of the time.  For x86_64, what is a decent compromise for a micro-architecture that will build and run on most AMD and Intel hardware?  What's a decent baseline for an ARM64 micro-architecture that will build and run on most ARM hardware?
> 
> 7. Many embedded systems don't build with glibc because it's too bloated (and because critical fixes sometimes take too long to roll out), and instead use MUSL, eglibc, or uClibc (although the last one seems to be waning).  Only glibc supports <execinfo.h> from what I can tell.  Can broader support for other C runtimes be added?  Can RTE_BACKTRACE be made a parameter or conditionally defined based on the runtime headers?  (autotools and HAVE_EXECINFO_H would be really handy here, but I'm not sure how to make this play well with meson/ninja, and truth be told I'm an old-school Makefile + autotools knuckle-dragger.)

You could do without backtrace, but then when a DPDK application crashes you are flying blind.


> 8. How do I validate that DPDK is properly being built with the cross-tools and not native tools?  Even when building x86_64 targets on an x86_64 build host, we end up using a custom toolchain and not the "native" compiler that comes with the distro.
> 
> 9. What is the parity between AMD64 and ARM64?  Do both platforms offer equal functionality and security, if not performance?

Apples/Oranges.

> 10. Who else is using DPDK with OpenWrt that is open to collaboration?
> 
> 11. What is the user-space TCP/IP stack of choice (or reference) for use with DPDK?

No user-space TCP/IP stack is really robust or all that great.
VPP has one, but it is likely to be specific to VPP, and I'm not sure you want to go there.
I don't think Fedora/Suse/Debian/Ubuntu have packaged any userspace TCP stack yet.




* Re: DPDK packaging and OpenWrt
  2023-05-16 19:18 DPDK packaging and OpenWrt Philip Prindeville
  2023-05-16 20:43 ` Stephen Hemminger
@ 2023-05-16 23:06 ` Garrett D'Amore
  2023-05-16 23:38   ` Philip Prindeville
  1 sibling, 1 reply; 7+ messages in thread
From: Garrett D'Amore @ 2023-05-16 23:06 UTC (permalink / raw)
  To: dev, Philip Prindeville


On May 16, 2023 at 12:19 PM -0700, Philip Prindeville <philipp_subx@redfish-solutions.com>, wrote:
> Hi,
>
> I'm a packaging maintainer for some of the OpenWrt ancillary packages, and I'm looking at upstreaming DPDK support into OpenWrt. Apologies in advance if this is a bit dense (a brain dump) or hops between too many topics.
>
> I was looking at what's been done to date, and it seems there are a few shortcomings which I hope to address.
>
> Amongst them are:
>
> [snip]
>
> I have some related questions.
>
> 1. OpenWrt already bakes "vfio.ko" and "vfio-pci.ko" here:
>
> https://github.com/openwrt/openwrt/blob/master/package/kernel/linux/modules/virt.mk#L77-L117
>
> but does so with "CONFIG_VFIO_PCI_IGD=y", which seems to be specifically for VGA acceleration of guests in a virtualized environment. That seems to be an odd corner case, and unlikely given that OpenWrt almost always runs on headless hardware. Should this be reverted?
>
> 2. "vfio.ko" is built with "CONFIG_VFIO_NOIOMMU=y", which seems insecure. Can this be reverted?
>
> 3. Is "uio_pci_generic.ko" worth the potential insecurity/instability of a misbehaving application? My understanding is that it's only needed on SR-IOV hardware where MSI/MSI-X interrupts aren't supported: is there even any current hardware that doesn't support MSI/MSI-X? Or am I misunderstanding the use case?

*Either* uio_pci_generic *or* vfio with NOIOMMU is required for the very large number of systems that either lack an IOMMU (btw, that will be nearly all OpenWrt platforms!) or elect to run with the IOMMU unconfigured.  One justification for doing that, apart from numerous software bugs and limitations, is that the IOMMU can slow down I/O; we actually recommend that most of our customers running dedicated systems disable the IOMMU for this reason.

vfio with noiommu is preferable.
>
> 4. Can most functionality be achieved with VFIO + IOMMU support?

*If* you have an IOMMU, and you aren’t trying to eke out the very last bits of performance, yes.  But as many systems don’t have an IOMMU, and as one of the main points of DPDK is extremely performance-sensitive applications, I think the answer is more broadly “no”.
> 11. What is the user-space TCP/IP stack of choice (or reference) for use with DPDK?

IMO, if you’re using DPDK to run *TCP* applications then you’re probably misusing it — there isn’t a user land TCP stack that I would trust.  IP/UDP is something we do, and it works well, but I can tell you already it’s a pain to do *just* IP, because e.g. routing tables, ARP, etc. all have to be handled.

• Garrett




* Re: DPDK packaging and OpenWrt
  2023-05-16 20:43 ` Stephen Hemminger
@ 2023-05-16 23:24   ` Philip Prindeville
  2023-05-16 23:41     ` Stephen Hemminger
  2023-05-16 23:43     ` Stephen Hemminger
  0 siblings, 2 replies; 7+ messages in thread
From: Philip Prindeville @ 2023-05-16 23:24 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

Hey Stephen, it's been a while...


> On May 16, 2023, at 2:43 PM, Stephen Hemminger <stephen@networkplumber.org> wrote:
> 
> On Tue, 16 May 2023 13:18:40 -0600
> Philip Prindeville <philipp_subx@redfish-solutions.com> wrote:
> 
>> Hi,
>> 
>> I'm a packaging maintainer for some of the OpenWrt ancillary packages, and I'm looking at upstreaming DPDK support into OpenWrt.  Apologies in advance if this is a bit dense (a brain dump) or hops between too many topics.
>> 
>> I was looking at what's been done to date, and it seems there are a few shortcomings which I hope to address.
>> 
>> Amongst them are:
>> 
>> * getting DPDK support into OpenWrt's main repo for the kmod's and into the packages repo for the user-space support;
> 
> DPDK kernel modules are deprecated; creating more usage of them is problematic.


Well, some kernel modules are still required, aren't they?  What's been deprecated?


> 
>> 
>> * making DPDK supported on all available architectures (i.e. agnostic, not just x86_64 specific);
>> 
>> * integrating into the OpenWrt "make menuconfig" system, so that editing packages directly isn't required;
> 
> Does the OpenWrt build system support meson?
> 
>> * supporting cross-building and avoiding the [flawed] assumption that the micro-architecture of the build server is in any way related to the processor type of the target machine(s);
>> 
>> * integration into the OpenWrt CI/CD tests;
>> 
>> * making the kernel support as secure/robust as possible (i.e. avoiding an ill-behaved application taking down the kernel, since this is a firewall after all);
> 
> Not a problem with vfio
> 
>> 
>> * avoiding conflict with other existing module or package functionality;
>> 
>> * avoiding, to the extent possible, introducing one-off toolchain dependencies that unnecessarily complicate the build ecosystem;
>> 
>> To this end, I'm asking the mailing list for guidance on the best packaging practices that satisfy these goals.  I'm willing to do whatever heavy lifting that's within my ability.
>> 
>> I have some related questions.
>> 
>> 1. OpenWrt already bakes "vfio.ko" and "vfio-pci.ko" here:
>> 
>> https://github.com/openwrt/openwrt/blob/master/package/kernel/linux/modules/virt.mk#L77-L117
>> 
>>   but does so with "CONFIG_VFIO_PCI_IGD=y", which seems to be specifically for VGA acceleration of guests in a virtualized environment.  That seems to be an odd corner case, and unlikely given that OpenWrt almost always runs on headless hardware.  Should this be reverted?
> 
> Yes.


Okay, thanks.


> 
>> 
>> 2. "vfio.ko" is built with "CONFIG_VFIO_NOIOMMU=y", which seems insecure.  Can this be reverted?
> 
> No. Most of OpenWrt's systems do not have an IOMMU.


And most of OpenWrt isn't going to need DPDK, either.  We're thinking of Xeon, Xeon-D, and Atom64 (C26xx) based Intel hardware, or high-end multicore ARM64 designs like the Traverse Technologies ten64.

In other words, Enterprise-class firewalls and data center appliances.


> 
>> 3. Is "uio_pci_generic.ko" worth the potential insecurity/instability of a misbehaving application?  My understanding is that it's only needed on SR-IOV hardware where MSI/MSI-X interrupts aren't supported: is there even any current hardware that doesn't support MSI/MSI-X?  Or am I misunderstanding the use case?
> 
> Use VFIO noiommu; it is better supported and tested upstream.  With PCI generic, no interrupts work.


What is the risk/reward of building with CONFIG_VFIO_NOIOMMU=n?  The IOMMU is a non-trivial bit of logic to include in a processor design: it must have had some scenario where it would be useful, otherwise that's a lot of wasted gates...


> 
>> 4. Can most functionality be achieved with VFIO + IOMMU support?
> 
> Yes.
> 
>> 5. I've seen packaging for the "iommu_v2.ko" module done here:
>> 
>> https://github.com/k13132/openwrt-dpdk/blob/main/packages/kmod-iommu_v2/Makefile#L22-L42
>> 
>>   Is this potentially useful?  What platforms/architectures use this driver?
>> 
>> 6. Hand editing a wrapper for dpdk.tar.gz is a non-starter.  I'd rather add Kconfig adornments to OpenWrt packaging for the wrapper so that options for "-Denable_drivers=" and "-Dcpu_instruction_set=" can be passed in once global build options for OpenWrt have been selected.  Defaulting the instruction set to the build host is going to be wrong at least some of the time, if not most of the time.  For x86_64, what is a decent compromise for a micro-architecture that will build and run on most AMD and Intel hardware?  What's a decent baseline for an ARM64 micro-architecture that will build and run on most ARM hardware?
>> 
>> 7. Many embedded systems don't build with glibc because it's too bloated (and because critical fixes sometimes take too long to roll out), and instead use MUSL, eglibc, or uClibc (although the last one seems to be waning).  Only glibc supports <execinfo.h> from what I can tell.  Can broader support for other C runtimes be added?  Can RTE_BACKTRACE be made a parameter or conditionally defined based on the runtime headers?  (autotools and HAVE_EXECINFO_H would be really handy here, but I'm not sure how to make this play well with meson/ninja, and truth be told I'm an old-school Makefile + autotools knuckle-dragger.)
> 
> You could do without backtrace, but then when a DPDK application crashes you are flying blind.


Yeah, that occurred to me too, but it seems that libbacktrace couldn't be shimmed in because it requires heap integrity, if I remember correctly...


> 
>> 8. How do I validate that DPDK is properly being built with the cross-tools and not native tools?  Even when building x86_64 targets on an x86_64 build host, we end up using a custom toolchain and not the "native" compiler that comes with the distro.
>> 
>> 9. What is the parity between AMD64 and ARM64?  Do both platforms offer equal functionality and security, if not performance?
> 
> Apples/Oranges.
> 
>> 10. Who else is using DPDK with OpenWrt that is open to collaboration?
>> 
>> 11. What is the user-space TCP/IP stack of choice (or reference) for use with DPDK?
> 
> No user-space TCP/IP stack is really robust or all that great.
> VPP has one, but it is likely to be specific to VPP, and I'm not sure you want to go there.
> I don't think Fedora/Suse/Debian/Ubuntu have packaged any userspace TCP stack yet.


Not robust how?  In terms of performance, security, handling error conditions, having complete feature handling...?

-Philip




* Re: DPDK packaging and OpenWrt
  2023-05-16 23:06 ` Garrett D'Amore
@ 2023-05-16 23:38   ` Philip Prindeville
  0 siblings, 0 replies; 7+ messages in thread
From: Philip Prindeville @ 2023-05-16 23:38 UTC (permalink / raw)
  To: Garrett D'Amore; +Cc: dev



> On May 16, 2023, at 5:06 PM, Garrett D'Amore <garrett@damore.org> wrote:
> 
> On May 16, 2023 at 12:19 PM -0700, Philip Prindeville <philipp_subx@redfish-solutions.com>, wrote:
> [snip]
> 
> 3. Is "uio_pci_generic.ko" worth the potential insecurity/instability of a misbehaving application? My understanding is that it's only needed on SR-IOV hardware where MSI/MSI-X interrupts aren't supported: is there even any current hardware that doesn't support MSI/MSI-X? Or am I misunderstanding the use case? 
> *Either* uio_pci_generic *or* vfio with NOIOMMU is required for the very large number of systems that either lack an IOMMU (btw, that will be nearly all OpenWrt platforms!) or elect to run with the IOMMU unconfigured.  One justification for doing that, apart from numerous software bugs and limitations, is that the IOMMU can slow down I/O; we actually recommend that most of our customers running dedicated systems disable the IOMMU for this reason.
> 
> vfio with noiommu is preferable.


I could build with CONFIG_VFIO_NOIOMMU=y and then package a modprobe .conf file (or the kernel command line, etc.) that sets enable_unsafe_noiommu_mode=1, right?

This would accomplish the same thing at run-time, but allow the module to be built so that it can be used either with or without an IOMMU?

That's per:

https://dpdk-guide.gitlab.io/dpdk-guide/setup/binding.html

But I don't know how recent that advice is...
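
Concretely, I mean shipping something like this (a sketch; the parameter
only exists when the kernel is built with CONFIG_VFIO_NOIOMMU=y):

    # /etc/modprobe.d/vfio.conf
    options vfio enable_unsafe_noiommu_mode=1

    # or flipped at runtime:
    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode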


> 
> 4. Can most functionality be achieved with VFIO + IOMMU support? 
> *If* you have an IOMMU, and you aren’t trying to eke out the very last bits of performance, yes.  But as many systems don’t have an IOMMU, and as one of the main points of DPDK is extremely performance-sensitive applications, I think the answer is more broadly “no”.
> 11. What is the user-space TCP/IP stack of choice (or reference) for use with DPDK? 
> IMO, if you’re using DPDK to run *TCP* applications then you’re probably misusing it — there isn’t a user land TCP stack that I would trust.  IP/UDP is something we do, and it works well, but I can tell you already it’s a pain to do *just* IP, because e.g. routing tables, ARP, etc. all have to be handled. 
>     • Garrett


Yeah, good point.  Are there shims to use with FRR, lldpd, et al for example?




* Re: DPDK packaging and OpenWrt
  2023-05-16 23:24   ` Philip Prindeville
@ 2023-05-16 23:41     ` Stephen Hemminger
  2023-05-16 23:43     ` Stephen Hemminger
  1 sibling, 0 replies; 7+ messages in thread
From: Stephen Hemminger @ 2023-05-16 23:41 UTC (permalink / raw)
  To: Philip Prindeville; +Cc: dev

On Tue, 16 May 2023 17:24:21 -0600
Philip Prindeville <philipp_subx@redfish-solutions.com> wrote:

> >> * getting DPDK support into OpenWrt's main repo for the kmod's and into the packages repo for the user-space support;  
> > 
> > DPDK kernel modules are deprecated; creating more usage of them is problematic.  
> 
> 
> Well, some kernel modules are still required, aren't they?  What's been deprecated?

igb_uio and kni are on the chopping block


* Re: DPDK packaging and OpenWrt
  2023-05-16 23:24   ` Philip Prindeville
  2023-05-16 23:41     ` Stephen Hemminger
@ 2023-05-16 23:43     ` Stephen Hemminger
  1 sibling, 0 replies; 7+ messages in thread
From: Stephen Hemminger @ 2023-05-16 23:43 UTC (permalink / raw)
  To: Philip Prindeville; +Cc: dev

On Tue, 16 May 2023 17:24:21 -0600
Philip Prindeville <philipp_subx@redfish-solutions.com> wrote:

> >> 11. What is the user-space TCP/IP stack of choice (or reference) for use with DPDK?  
> > 
> > No user-space TCP/IP stack is really robust or all that great.
> > VPP has one, but it is likely to be specific to VPP, and I'm not sure you want to go there.
> > I don't think Fedora/Suse/Debian/Ubuntu have packaged any userspace TCP stack yet.  
> 
> 
> Not robust how?  In terms of performance, security, handling error conditions, having complete feature handling...?

TCP with all the features is very complex; heck, Linux is still fixing corner cases.
Sure, if you just want one socket to send an email or a print job, that works fine.
But doing it at DPDK levels of performance is quite complex.

