DPDK usage discussions
* Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver)
@ 2024-04-10  8:17 Tao Li
  2024-04-19 17:30 ` Dariusz Sosnowski
  0 siblings, 1 reply; 3+ messages in thread
From: Tao Li @ 2024-04-10  8:17 UTC (permalink / raw)
  To: users; +Cc: tao.li06


Hi All,

I am currently experimenting with a feature newly supported in DPDK 23.11, known as "multiport e-switch" (<https://doc.dpdk.org/guides/nics/mlx5.html#multiport-e-switch>), to improve communication reliability on the server side. During the trials, I encountered an issue in which activating multiport e-switch mode on the NIC disrupts the hypervisor's software running on the second PF interface (PF1). More specifically, right after setting multiport e-switch mode for the NIC as described in the documentation, packets arriving on the second PF (PF1) can no longer be delivered to the hypervisor's kernel network stack. A comparison of packet traces on the second PF (PF1, ens2f1np1) before and after setting multiport e-switch mode is included below. The highlighted packets of the second ("before") trace, i.e. the SYN-ACK and everything after it, are missing under multiport e-switch mode: in the first ("after") trace the BGP SYN goes unanswered and is simply retransmitted.
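
For reference, the enabling sequence follows the mlx5 guide; a sketch is below. The mst device path is a placeholder specific to each setup, and the PCI address is taken from the testpmd command later in this mail.

----<enabling multiport e-switch mode (sketch following the mlx5 guide)>-----
# One-time firmware configuration (placeholder mst device path),
# followed by a firmware reset or reboot:
sudo mlxconfig -d /dev/mst/mt4125_pciconf0 set LAG_RESOURCE_ALLOCATION=1
# Runtime switch, issued on PF0 of the device:
sudo devlink dev param set pci/0000:3b:00.0 name esw_multiport value true cmode runtime
----</enabling multiport e-switch mode>-----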

----<test environment>-----
ConnectX-6 Dx with firmware version 22.39.1002
Linux kernel version: 6.6.16
DPDK: 23.11
----</test environment>------

----<packet trace after setting multiport e-switch mode>------
14:37:24.835716 04:3f:72:e8:cf:cb > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 78: fe80::63f:72ff:fee8:cfcb > ff02::1: ICMP6, router advertisement, length 24

14:37:28.527829 90:3c:b3:33:83:fb > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 78: fe80::923c:b3ff:fe33:83fb > ff02::1: ICMP6, router advertisement, length 24

14:37:28.528359 04:3f:72:e8:cf:cb > 90:3c:b3:33:83:fb, ethertype IPv6 (0x86dd), length 94: fe80::63f:72ff:fee8:cfcb.54096 > fe80::923c:b3ff:fe33:83fb.179: Flags [S], seq 2779843599, win 33120, options [mss 1440,sackOK,TS val 1610632473 ecr 0,nop,wscale 7], length 0 // link-local addresses are used

14:37:29.559918 04:3f:72:e8:cf:cb > 90:3c:b3:33:83:fb, ethertype IPv6 (0x86dd), length 94: fe80::63f:72ff:fee8:cfcb.54096 > fe80::923c:b3ff:fe33:83fb.179: Flags [S], seq 2779843599, win 33120, options [mss 1440,sackOK,TS val 1610633505 ecr 0,nop,wscale 7], length 0

14:37:30.583925 04:3f:72:e8:cf:cb > 90:3c:b3:33:83:fb, ethertype IPv6 (0x86dd), length 94: fe80::63f:72ff:fee8:cfcb.54096 > fe80::923c:b3ff:fe33:83fb.179: Flags [S], seq 2779843599, win 33120, options [mss 1440,sackOK,TS val 1610634529 ecr 0,nop,wscale 7], length 0
----</packet trace after setting multiport e-switch mode>------

----<packet trace before setting multiport e-switch mode> ------
16:09:40.375865 90:3c:b3:33:83:fb > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 78: fe80::923c:b3ff:fe33:83fb > ff02::1: ICMP6, router advertisement, length 24

16:09:40.376473 fa:e4:cf:2d:11:b9 > 90:3c:b3:33:83:fb, ethertype IPv6 (0x86dd), length 94: fe80::f8e4:cfff:fe2d:11b9.36168 > fe80::923c:b3ff:fe33:83fb.179: Flags [S], seq 3409227589, win 33120, options [mss 1440,sackOK,TS val 2302010436 ecr 0,nop,wscale 7], length 0

16:09:40.376692 90:3c:b3:33:83:fb > fa:e4:cf:2d:11:b9, ethertype IPv6 (0x86dd), length 94: fe80::923c:b3ff:fe33:83fb.179 > fe80::f8e4:cfff:fe2d:11b9.36168: Flags [S.], seq 3495571820, ack 3409227590, win 63196, options [mss 9040,sackOK,TS val 1054058675 ecr 2302010436,nop,wscale 9], length 0

16:09:40.376711 fa:e4:cf:2d:11:b9 > 90:3c:b3:33:83:fb, ethertype IPv6 (0x86dd), length 86: fe80::f8e4:cfff:fe2d:11b9.36168 > fe80::923c:b3ff:fe33:83fb.179: Flags [.], ack 1, win 259, options [nop,nop,TS val 2302010436 ecr 1054058675], length 0

16:09:40.376865 fa:e4:cf:2d:11:b9 > 90:3c:b3:33:83:fb, ethertype IPv6 (0x86dd), length 193: fe80::f8e4:cfff:fe2d:11b9.36168 > fe80::923c:b3ff:fe33:83fb.179: Flags [P.], seq 1:108, ack 1, win 259, options [nop,nop,TS val 2302010436 ecr 1054058675], length 107: BGP

16:09:40.376986 90:3c:b3:33:83:fb > fa:e4:cf:2d:11:b9, ethertype IPv6 (0x86dd), length 86: fe80::923c:b3ff:fe33:83fb.179 > fe80::f8e4:cfff:fe2d:11b9.36168: Flags [.], ack 108, win 124, options [nop,nop,TS val 1054058676 ecr 2302010436], length 0
----</packet trace before setting multiport e-switch mode> ------

Attempts to ping this hypervisor from another directly connected host likewise resulted in the incoming ICMP packets not being captured, and this is reproducible in a second test environment. In the end, I was able to restore communication on the second PF by using a vdev TAP device and forwarding packets between the TAP device and PF1, as shown in our public example code: <https://github.com/byteocean/multiport-eswitch-example>.
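
For illustration, the workaround can be approximated with testpmd alone using the net_tap vdev; this is a minimal sketch, not the exact application from the repository above. The port indices (1 for PF1, 4 for the TAP port) are assumptions that depend on probe order and should first be verified with "show port summary all".

----<illustrative TAP workaround with testpmd (assumed port indices)>-----
sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 \
    --vdev=net_tap0,iface=pf1tap -- -i
testpmd> show port summary all   # confirm which index is PF1 and which is the TAP port
testpmd> set portlist 1,4        # pair PF1 with the TAP port (assumed indices)
testpmd> set fwd io
testpmd> start
----</illustrative TAP workaround with testpmd>-----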

Enabling isolation mode on PF1, either by starting testpmd with `--flow-isolate-all` or programmatically via `rte_flow_isolate()`, does not change the behavior described above; it only affects whether packets can be captured and processed by the DPDK application. A minimal sketch of the programmatic variant follows the command below.
----<command to start testpmd> ------
sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,dv_esw_en=1,fdb_def_rule_en=1,representor=pf0-1vf0 -- -i --rxq=1 --txq=1 --flow-isolate-all
----</command to start testpmd> ------
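
For reference, a minimal C sketch of the programmatic variant, assuming `port_id` is the already-probed DPDK port for PF1; the function name and error handling are illustrative, not taken from our code:

----<sketch: programmatic flow isolation>-----
#include <stdint.h>
#include <stdio.h>
#include <rte_flow.h>

/* Minimal sketch: request isolated mode on an already-probed port.
 * Isolated mode should be set before the port is started. */
static int
enable_flow_isolation(uint16_t port_id)
{
	struct rte_flow_error err;

	/* Second argument: nonzero enters isolated mode, zero leaves it. */
	if (rte_flow_isolate(port_id, 1, &err) != 0) {
		fprintf(stderr, "rte_flow_isolate() failed: %s\n",
			err.message != NULL ? err.message : "(no message)");
		return -1;
	}
	return 0;
}
----</sketch: programmatic flow isolation>-----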

Any experience or comments on the issue described above would be much appreciated. Thanks a lot in advance.

Best regards,
Tao Li






* RE: Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver)
  2024-04-10  8:17 Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver) Tao Li
@ 2024-04-19 17:30 ` Dariusz Sosnowski
  2024-04-30 14:30   ` Tao Li
  0 siblings, 1 reply; 3+ messages in thread
From: Dariusz Sosnowski @ 2024-04-19 17:30 UTC (permalink / raw)
  To: Tao Li, users; +Cc: tao.li06

Hi,

I could not reproduce the issue locally with testpmd with flow isolation enabled. I can see ICMP packets passing both ways to the kernel interfaces of PF0 and PF1.
Without flow isolation, it is expected that traffic coming to the host will be hijacked by DPDK (depending on the MAC address, multicast configuration and promiscuous mode).

Could you please run testpmd with the following command line parameters and execute the following commands?

Testpmd command line:
	dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- --flow-isolate-all -i

Testpmd commands:
	port stop all
	flow configure 0 queues_number 4 queues_size 64
	flow configure 1 queues_number 4 queues_size 64
	flow configure 2 queues_number 4 queues_size 64
	flow configure 3 queues_number 4 queues_size 64
	port start 0
	port start 1
	port start 2
	port start 3
	set verbose 1
	set fwd rxonly
	start

With this testpmd running, could you please test if both PF0 and PF1 kernel interfaces are reachable and all packets pass?
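
For example, a simple check could look like the following; the address and interface name are placeholders for whatever is configured on PF1 in your setup:

----<example reachability check (placeholder address/interface)>-----
# On the directly connected peer: ping an address configured on the hypervisor's PF1
ping -c 3 2001:db8::1
# On the hypervisor: confirm the packets reach the kernel interface
sudo tcpdump -i ens2f1np1 icmp6
----</example reachability check>-----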

Best regards,
Dariusz Sosnowski

> From: Tao Li <byteocean@hotmail.com> 
> Sent: Wednesday, April 10, 2024 10:18
> To: users@dpdk.org
> Cc: tao.li06@sap.com
> Subject: Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver)
>
> [original message quoted in full; trimmed, see above]


* Re: Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver)
  2024-04-19 17:30 ` Dariusz Sosnowski
@ 2024-04-30 14:30   ` Tao Li
  0 siblings, 0 replies; 3+ messages in thread
From: Tao Li @ 2024-04-30 14:30 UTC (permalink / raw)
  To: Dariusz Sosnowski, users; +Cc: tao.li06


Hi Dariusz,

Thank you very much for taking a look at the issue and providing suggestions. This time, we again performed tests using two directly connected machines, and focused on ICMP (IPv4) packets in addition to the ICMPv6 packets mentioned in the original problem description. The issue remains the same. I would like to highlight two points about our setup:


  1.  ICMP packets can no longer be captured on PF1 immediately after setting the NIC into multiport e-switch mode. If I switch multiport e-switch mode off again using the following two commands, ICMP communication resumes immediately, which should prove that the system configuration (e.g. the firewall) is correct. I would also assume it has little to do with a running DPDK application, since communication is already broken before starting an application such as testpmd. (A quick way to read the current mode back follows this list.)

sudo devlink dev param set pci/0000:3b:00.0 name esw_multiport value false cmode runtime

sudo devlink dev param set pci/0000:3b:00.1 name esw_multiport value false cmode runtime


  2.  In this setup, we do not use the MLNX_OFED drivers but rely on the upstream Mellanox drivers from Linux kernel 6.5.0 (which is newer than the suggested minimum kernel version 6.3). Would that make a difference? Could you share some more detailed information about the environment setup on your side? The firmware version we are using for the Mellanox ConnectX-6 is 22.39.1002.
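
As a quick sanity check of the current mode on both PFs, devlink can read the parameter back; a small sketch with the same PCI addresses as in point 1:

----<checking the current mode (sketch)>-----
sudo devlink dev param show pci/0000:3b:00.0 name esw_multiport
sudo devlink dev param show pci/0000:3b:00.1 name esw_multiport
----</checking the current mode>-----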

Looking forward to your further reply. Thanks in advance.

Best regards,
Tao Li

From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Friday, 19. April 2024 at 19:30
To: Tao Li <byteocean@hotmail.com>, users@dpdk.org <users@dpdk.org>
Cc: tao.li06@sap.com <tao.li06@sap.com>
Subject: RE: Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver)
[quoted text of the previous messages trimmed; see above]


