DPDK usage discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Jaeeun Ham <jaeeun.ham@ericsson.com>
Cc: "users@dpdk.org" <users@dpdk.org>,
	"alialnu@nvidia.com" <alialnu@nvidia.com>,
	"rasland@nvidia.com" <rasland@nvidia.com>,
	"asafp@nvidia.com" <asafp@nvidia.com>
Subject: Re: I need DPDK MLX5 Probe error support
Date: Tue, 05 Oct 2021 08:00:06 +0200
Message-ID: <90747432.bxWCIcx659@thomas>
In-Reply-To: <HE1PR07MB4220CB9F8D24E7C470F822BAF3AF9@HE1PR07MB4220.eurprd07.prod.outlook.com>

05/10/2021 03:17, Jaeeun Ham:
> Hi Thomas,
> 
> I attached the testpmd result, which was gathered on the host server.
> Could you please take a look at the mlx5_core PCI issue?

I see no real issue in the log.
For doing more tests, I recommend using the latest DPDK version.
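
If you need to build it yourself, a minimal build of a recent DPDK from
source looks roughly like this (a sketch only, assuming meson, ninja and
pyelftools are already installed; adjust paths as needed):

    git clone https://github.com/DPDK/dpdk.git
    cd dpdk
    meson setup build
    ninja -C build
    ninja -C build install
    ldconfig
    # quick check that dpdk-testpmd starts and sees the ports
    echo show port summary all | dpdk-testpmd --in-memory -- -i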


> Thank you in advance.
> 
> BR/Jaeeun
> 
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net> 
> Sent: Sunday, October 3, 2021 4:51 PM
> To: Jaeeun Ham <jaeeun.ham@ericsson.com>
> Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
> 
> Hi,
> 
> I think you need to read the documentation.
> For DPDK install on Linux:
> https://doc.dpdk.org/guides/linux_gsg/build_dpdk.html#compiling-and-installing-dpdk-system-wide
> For mlx5 specific dependencies, install rdma-core package:
> https://doc.dpdk.org/guides/nics/mlx5.html#linux-prerequisites
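> 
> For example, on an Ubuntu/Debian based host the mlx5 dependencies can
> typically be installed like this before building DPDK (a rough sketch;
> package names may differ on other distributions):
> 
>     apt-get install -y rdma-core libibverbs-dev ibverbs-providers ibverbs-utils
>     # sanity check: the ConnectX devices should then be visible to the verbs layer
>     ibv_devinfo | grep -e hca_id -e state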
> 
> 
> 02/10/2021 12:57, Jaeeun Ham:
> > Hi,
> > 
> > Could you teach me how to install dpdk-testpmd?
> > I have to run the application on the host server, not a development server.
> > So, I don't know how to get dpdk-testpmd.
> > 
> > By the way, the testpmd run result is as below.
> > root@seroics05590:~/ejaeham# testpmd
> > EAL: Detected 64 lcore(s)
> > EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
> > EAL: FATAL: Cannot init plugins
> > 
> > EAL: Cannot init plugins
> > 
> > PANIC in main():
> > Cannot init EAL
> > 5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
> > 4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f5e044a4bf7]]
> > 3: [testpmd(main+0x907) [0x55d301d98d07]]
> > 2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) [0x7f5e04ca3cfd]]
> > 1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) [0x7f5e04cac19e]]
> > Aborted
> > 
> > 
> > I added the options below when starting the process in the Docker container.
> >  dv_flow_en=0 \
> >  --log-level=pmd,8 \
> > < MLX5 log >
> > 415a695ba348:/tmp/logs # cat epp.log
> > MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1 
> > MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > 
> > EAL: Requested device 0000:12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > 
> > EAL: Requested device 0000:12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > Caught signal 15
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > FATAL: epp_init.c::copy_mac_addr:130: Call to rte_eth_dev_get_port_by_name(src_dpdk_dev_name, &port_id) failed: -19 (Unknown error -19), rte_errno=0 (not set)
> > 
> > Caught signal 6
> > Obtained 7 stack frames, tid=713.
> > tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4]
> > tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80]
> > tid=713, /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b]
> > tid=713, /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585]
> > tid=713, /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818]
> > tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d]
> > tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) [0x4091ca]
> > 
> > < i40e log >
> > cat epp.log
> > MIDHAUL_PCI_ADDR:0000:3b:0d.5, BACKHAUL_PCI_ADDR:0000:3b:0d.4 
> > MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'vfio_mem_event_clb:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> > i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> > i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
> > 
> > i40evf_dev_start(): >>
> > i40evf_config_rss(): No hash flag is set
> > i40e_set_rx_function(): Vector Rx path will be used on port=0.
> > i40e_set_tx_function(): Xmit tx finally be used.
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> > i40evf_add_del_all_mac_addr(): add/rm mac:62:64:21:84:83:b0
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> > i40evf_dev_rx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_tx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> > i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
> > i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
> > 
> > i40evf_dev_start(): >>
> > i40evf_config_rss(): No hash flag is set
> > i40e_set_rx_function(): Vector Rx path will be used on port=1.
> > i40e_set_tx_function(): Xmit tx finally be used.
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> > i40evf_add_del_all_mac_addr(): add/rm mac:c2:88:5c:a9:a2:ef
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> > i40evf_dev_rx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_tx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > i40evf_dev_mtu_set(): port 1 must be stopped before configuration
> > i40evf_dev_mtu_set(): port 0 must be stopped before configuration 
> > Caught signal 10
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15 
> > Caught signal 10
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > 
> > 
> > The process start options, which are set by a shell script, are as below.
> > 
> > < start-epp.sh >
> > exec /usr/local/bin/ericsson-packet-processor \
> >   $(get_dpdk_core_list_parameter) \
> >   $(get_dpdk_mem_parameter) \
> >   $(get_dpdk_hugepage_parameters) \
> >  -d /usr/local/lib/librte_mempool_ring.so \
> >  -d /usr/local/lib/librte_mempool_stack.so \
> >  -d /usr/local/lib/librte_net_pcap.so \
> >  -d /usr/local/lib/librte_net_i40e.so \
> >  -d /usr/local/lib/librte_net_mlx5.so \
> >  -d /usr/local/lib/librte_event_dsw.so \
> >  $DPDK_PCI_OPTIONS \
> >  --vdev=event_dsw0 \
> >  --vdev=eth_pcap0,iface=midhaul_edk \
> >  --vdev=eth_pcap1,iface=backhaul_edk \
> >  --file-prefix=container \
> >  --log-level lib.eal:debug \
> >  dv_flow_en=0 \
> >  --log-level=pmd,8 \
> >  -- \
> >  $(get_epp_mempool_parameter) \
> >  "--neighbor-discovery-interface=midhaul_ker,${MIDHAUL_IP_ADDR},mac_addr_dev=${MIDHAUL_MAC_ADDR_DEV},vr_id=0" \
> >  "--neighbor-discovery-interface=backhaul_ker,${BACKHAUL_IP_ADDR},mac_addr_dev=${BACKHAUL_MAC_ADDR_DEV},vr_id=1"
> > 
> > BR/Jaeeun
> > 
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Wednesday, September 29, 2021 8:16 PM
> > To: Jaeeun Ham <jaeeun.ham@ericsson.com>
> > Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; 
> > asafp@nvidia.com
> > Subject: Re: I need DPDK MLX5 Probe error support
> > 
> > 27/09/2021 02:18, Jaeeun Ham:
> > > Hi,
> > > 
> > > I hope you are well.
> > > My name is Jaeeun Ham, and I have been working for Ericsson.
> > > 
> > > I am having trouble enabling the MLX5 NIC, so could you take a look at how to run it?
> > > There are two PCI addresses for the SR-IOV (VFIO) mlx5 NIC, but it does not run correctly. (12:01.0, 12:01.1)
> > > 
> > > I started one process running inside a Docker container on the host server that has the MLX5 NIC.
> > > The process starts with the following option:
> > >     -d /usr/local/lib/librte_net_mlx5.so
> > > And the Docker container has the mlx5 libraries as below.
> > 
> > Did you try on the host outside of any container?
> > 
> > Could you please try the following commands (variables to be replaced)?
> > 
> >     dpdk-hugepages.py --reserve 1G
> >     ip link set $netdev netns $container
> >     docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
> >                --device /dev/infiniband/ $image
> >     echo show port summary all | dpdk-testpmd --in-memory -- -i
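> > 
> > It can also help to first check on the host how the Mellanox PCI functions
> > and their IB devices are seen by the kernel (just a sanity check, adjust to
> > your setup):
> > 
> >     lspci -d 15b3:     # list Mellanox (vendor ID 15b3) PCI functions
> >     rdma link show     # show IB devices (mlx5_N) and their kernel netdevs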
> > 
> > 
> > 
> > > 706a37a35d29:/usr/local/lib # ls -1 | grep mlx
> > > librte_common_mlx5.so
> > > librte_common_mlx5.so.21
> > > librte_common_mlx5.so.21.0
> > > librte_net_mlx5.so
> > > librte_net_mlx5.so.21
> > > librte_net_mlx5.so.21.0
> > > 
> > > But the process failed to run with the following error.
> > > (MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1)
> > > 
> > > ---
> > > 
> > > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > > EAL: Requested device 0000:12:01.0 cannot be used
> > > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > > EAL: Requested device 0000:12:01.1 cannot be used
> > > EAL: Bus (pci) probe failed.
> > > 
> > > ---
> > > 
> > > For the successful case of PCI address 12:01.2, it showed the following messages.
> > > 
> > > ---
> > > 
> > > EAL: Detected 64 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> > > EAL: Probing VFIO support...
> > > EAL: VFIO support initialized
> > > EAL: PCI device 0000:12:01.2 on NUMA socket 0
> > > EAL:   probe driver: 15b3:1016 net_mlx5
> > > net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old OFED/rdma-core version or firmware configuration
> > > net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger than a single mbuf (2048) and scattered mode has not been requested
> > > USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket 0
> > > 
> > > ---
> > > 
> > > BR/Jaeeun
> > 
> 
> 
> 
> 







Thread overview: 13+ messages
     [not found] <HE1PR07MB422057B3D4E22FB2BE882D37F3A59@HE1PR07MB4220.eurprd07.prod.outlook.com>
     [not found] ` <HE1PR07MB42201B7FB0F61E6FCB39AAE9F3A79@HE1PR07MB4220.eurprd07.prod.outlook.com>
2021-09-29 11:16   ` Thomas Monjalon
2021-10-02 10:57     ` Jaeeun Ham
2021-10-03  7:51       ` Thomas Monjalon
2021-10-03  8:10         ` Jaeeun Ham
2021-10-03 18:44           ` Thomas Monjalon
2021-10-05  1:17         ` Jaeeun Ham
2021-10-05  6:00           ` Thomas Monjalon [this message]
2021-10-06  9:57         ` Jaeeun Ham
2021-10-06 10:58           ` Thomas Monjalon
     [not found]             ` <HE1PR07MB42208754E63C0DE2D0F0D138F3B09@HE1PR07MB4220.eurprd07.prod.outlook.com>
2021-10-06 13:19               ` Thomas Monjalon
2021-10-09  1:12                 ` Jaeeun Ham
2021-10-09  1:15                   ` Jaeeun Ham
2021-10-09  4:42                     ` Jaeeun Ham

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=90747432.bxWCIcx659@thomas \
    --to=thomas@monjalon.net \
    --cc=alialnu@nvidia.com \
    --cc=asafp@nvidia.com \
    --cc=jaeeun.ham@ericsson.com \
    --cc=rasland@nvidia.com \
    --cc=users@dpdk.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
