DPDK CI discussions
* Re: [dpdk-ci] [dpdk-users] DPDK TX problems
From: Thomas Monjalon @ 2020-03-26 20:54 UTC (permalink / raw)
  To: Hrvoje Habjanic; +Cc: users, galco, asafp, olgas, ci

Thanks for the interesting feedback.
It seems we should test this performance use case in our labs.


18/02/2020 09:36, Hrvoje Habjanic:
> On 08. 04. 2019. 11:52, Hrvoje Habjanić wrote:
> > On 29/03/2019 08:24, Hrvoje Habjanić wrote:
> >>> Hi.
> >>>
> >>> I wrote an application using DPDK 17.11 (I also tried 18.11), and
> >>> when doing some performance testing, I'm seeing very odd behavior.
> >>> To verify that this is not caused by my app, I ran the same test with
> >>> the l2fwd example app, and I'm still confused by the results.
> >>>
> >>> In short, I'm trying to push a lot of L2 packets through the DPDK
> >>> engine - packet processing is minimal. When testing, I start with a
> >>> small number of packets per second and then gradually increase it to
> >>> see where the limit is. At some point I do reach this limit - packets
> >>> start to get dropped. And this is when things become weird.
> >>>
> >>> When I reach the peak packet rate (at which packets start to get
> >>> dropped), I would expect that reducing the packet rate would remove the
> >>> drops. But this is not the case. For example, let's assume the peak
> >>> packet rate is 3.5 Mpps. At this point everything works OK. Increasing
> >>> the rate to 4.0 Mpps causes a lot of dropped packets. When reducing the
> >>> rate back to 3.5 Mpps, the app is still broken - packets are still
> >>> dropped.
> >>>
> >>> At this point I need to drastically reduce the rate (to 1.4 Mpps) to
> >>> make the drops go away. The app is then unable to forward anything
> >>> beyond this 1.4 Mpps, despite the fact that in the beginning it did
> >>> forward 3.5 Mpps! The only way to recover is to restart the app.
> >>>
> >>> Also, sometimes the app just stops forwarding any packets - packets
> >>> are received (as seen by the counters), but the app is unable to send
> >>> anything back.
> >>>
> >>> As I mentioned, I see the same behavior with the l2fwd example app. I
> >>> tested both DPDK 17.11 and DPDK 18.11 - the results are the same.
> >>>
> >>> My test environment is an HP DL380G8 with 82599ES 10Gig (ixgbe)
> >>> cards, connected to a Cisco Nexus 9300 switch. On the other side is an
> >>> Ixia test appliance. The application runs in a virtual machine (VM)
> >>> under KVM (OpenStack, with SR-IOV enabled and NUMA restrictions). I
> >>> checked that the VM uses only CPUs from the NUMA node to which the
> >>> network card is connected, so there is no cross-NUMA traffic. OpenStack
> >>> is Queens, the host is Ubuntu Bionic, and the VM also runs Ubuntu
> >>> Bionic.
> >>>
> >>> I do not know how to debug this. Does someone else have the same
> >>> observations?
> >>>
> >>> Regards,
> >>>
> >>> H.
> >> There are additional findings. It seems that when I reach the peak pps
> >> rate, the application is not fast enough, and I can see rx missed errors
> >> in the card statistics on the host. At the same time, the TX side starts
> >> to show problems (tx_burst reports that it did not send all packets).
> >> Shortly after that, TX falls apart completely and the top pps rate drops.
> >>
> >> Since I did not disable pause frames, I can see the "RX pause" frame
> >> counter increasing on the switch. On the other hand, if I disable pause
> >> frames (on the server's NIC), the host driver (ixgbe) reports "TX unit
> >> hang" in dmesg and issues a card reset. Of course, after the reset none
> >> of the DPDK apps in the VMs on this host work anymore.
> >>
> >> Is it possible that at the time of congestion DPDK does not release
> >> mbufs back to the pool, and the TX ring becomes "filled" with zombie
> >> packets (not sent by the card, but still refcounted as in use)?
> >>
> >> Is there a way to check the mempool or TX ring for "leftovers"? Is it
> >> possible to somehow "flush" the TX ring and/or mempool?
> >>
> >> H.
> > After a few more tests, things become even weirder - if I do not free the
> > mbufs which were not sent, but resend them instead, I can "survive" the
> > over-the-peak event! But then the peak rate starts to drop gradually ...
> >
> > Could someone try this on their platform and report back? I would really
> > like to know whether this is a problem with my deployment or something
> > wrong with DPDK.
> >
> > The test should be simple - use l2fwd or l3fwd and determine the max pps.
> > Then drive the pps 30% over the max, then return below it and confirm that
> > you can still get the max pps.
> >
> > Thanks in advance.
> >
> > H.
> >
> 
> I received a few mails from users facing this issue, asking how it was
> resolved.
> 
> Unfortunately, there is no real fix. It seems this issue is related to the
> card and hardware used. I'm still not sure which is more to blame, but the
> combination I had is definitely problematic.
> 
> Anyhow, in the end I concluded that the card driver has some issues when
> it is saturated with packets. My suspicion is that the driver/software does
> not properly free packets, the DPDK mempool becomes fragmented, and this
> causes the performance drop. Restarting the software releases the pools and
> restores proper functionality.
> 
> After no luck with ixgbe, we migrated to Mellanox (4LX), and the permanent
> performance drop is gone. With mlx, when the limit is reached, reducing the
> number of packets restores packet forwarding, and this limit seems to be
> stable.
> 
> We also moved to newer servers - DL380G10 - and got a significant
> performance increase. In addition, we moved to a newer switch (also Cisco)
> with 25G ports, which reduced latency - almost by a factor of 2!
> 
> I did not try the old ixgbe card in the newer server, but I did try Intel's
> XL710, and it is not as happy as the Mellanox. It gives better pps, but it
> is more unstable in terms of maximum bandwidth (it has similar issues to
> ixgbe).
> 
> Regards,
> 
> H.






* Re: [dpdk-ci] [dpdk-users] DPDK TX problems
From: Lincoln Lavoie @ 2020-03-27 18:25 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Hrvoje Habjanic, users, galco, asafp, olgas, ci


Hi Thomas,

I've captured this as https://bugs.dpdk.org/show_bug.cgi?id=429, so we can
add this to the list of development items for the testing, etc.

Cheers,
Lincoln

-- 
*Lincoln Lavoie*
Senior Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
lylavoie@iol.unh.edu
https://www.iol.unh.edu
+1-603-674-2755 (m)

