DPDK patches and discussions
* [dpdk-dev] [PATCH 1/4] common/mlx5: share UAR allocation routine
@ 2020-11-10 16:04 Viacheslav Ovsiienko
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 2/4] regex/mlx5: fix UAR allocation Viacheslav Ovsiienko
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2020-11-10 16:04 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, stable

This patch introduces a routine to allocate the UAR (User
Access Region) with various memory mapping types. The original
patch being fixed provided the UAR allocation workaround
for the mlx5 net PMD only. It was found that the other mlx5
based drivers - vdpa and regex - are affected by the issue
as well and must be fixed.

Fixes: a0bfe9d56f74 ("net/mlx5: fix UAR memory mapping type")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.c    | 98 ++++++++++++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common.h    |  2 +
 drivers/common/mlx5/mlx5_devx_cmds.h |  8 +++
 drivers/common/mlx5/version.map      |  1 +
 drivers/net/mlx5/mlx5_defs.h         |  9 ----
 5 files changed, 109 insertions(+), 9 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 06f0a64..d2bdf29 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -244,3 +244,101 @@ static inline void mlx5_cpu_id(unsigned int level,
 	}
 	return ret;
 }
+
+/**
+ * Allocate the User Access Region with DevX on specified device.
+ *
+ * @param [in] ctx
+ *   Infiniband device context to perform allocation on.
+ * @param [in] mapping
+ *   MLX5DV_UAR_ALLOC_TYPE_BF - allocate as cached memory with write-combining
+ *				attributes (if supported by the host); writes
+ *				to the UAR registers must be followed by a
+ *				write memory barrier.
+ *   MLX5DV_UAR_ALLOC_TYPE_NC - allocate as non-cached memory; all writes are
+ *				promoted to the registers immediately, no
+ *				memory barriers are needed.
+ *   mapping < 0 - the first attempt is performed with MLX5DV_UAR_ALLOC_TYPE_BF;
+ *		   if this fails, the next attempt is performed with
+ *		   MLX5DV_UAR_ALLOC_TYPE_NC. Drivers specifying negative values
+ *		   should always provide a write memory barrier operation after
+ *		   UAR register writes.
+ * If there are no definitions for MLX5DV_UAR_ALLOC_TYPE_xx (older rdma-core
+ * library headers), the caller can specify 0.
+ *
+ * @return
+ *   UAR object pointer on success, NULL otherwise and rte_errno is set.
+ */
+void *
+mlx5_devx_alloc_uar(void *ctx, int mapping)
+{
+	void *uar;
+	uint32_t retry, uar_mapping;
+	void *base_addr;
+
+	for (retry = 0; retry < MLX5_ALLOC_UAR_RETRY; ++retry) {
+#ifdef MLX5DV_UAR_ALLOC_TYPE_NC
+		/* Control the mapping type according to the settings. */
+		uar_mapping = (mapping < 0) ?
+			      MLX5DV_UAR_ALLOC_TYPE_NC : mapping;
+#else
+		/*
+		 * It seems we have no way to control the memory mapping type
+		 * for the UAR, the default "Write-Combining" type is assumed.
+		 */
+		uar_mapping = 0;
+#endif
+		uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping);
+#ifdef MLX5DV_UAR_ALLOC_TYPE_NC
+		if (!uar &&
+		    mapping < 0 &&
+		    uar_mapping == MLX5DV_UAR_ALLOC_TYPE_BF) {
+			/*
+			 * In some environments like virtual machines the
+			 * Write-Combining mapping might not be supported and
+			 * UAR allocation fails. We try the "Non-Cached"
+			 * mapping in this case.
+			 */
+			DRV_LOG(WARNING, "Failed to allocate DevX UAR (BF)");
+			uar_mapping = MLX5DV_UAR_ALLOC_TYPE_NC;
+			uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping);
+		} else if (!uar &&
+			   mapping < 0 &&
+			   uar_mapping == MLX5DV_UAR_ALLOC_TYPE_NC) {
+			/*
+			 * If Verbs/kernel does not support "Non-Cached",
+			 * try the "Write-Combining" mapping.
+			 */
+			DRV_LOG(WARNING, "Failed to allocate DevX UAR (NC)");
+			uar_mapping = MLX5DV_UAR_ALLOC_TYPE_BF;
+			uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping);
+		}
+#endif
+		if (!uar) {
+			DRV_LOG(ERR, "Failed to allocate DevX UAR (BF/NC)");
+			rte_errno = ENOMEM;
+			goto exit;
+		}
+		base_addr = mlx5_os_get_devx_uar_base_addr(uar);
+		if (base_addr)
+			break;
+		/*
+		 * The UARs are allocated by rdma_core within the
+		 * IB device context; on context closure all UARs
+		 * will be freed, so there should be no memory/object leakage.
+		 */
+		DRV_LOG(WARNING, "Retrying to allocate DevX UAR");
+		uar = NULL;
+	}
+	/* Check whether we finally succeeded with valid UAR allocation. */
+	if (!uar) {
+		DRV_LOG(ERR, "Failed to allocate DevX UAR (NULL base)");
+		rte_errno = ENOMEM;
+	}
+	/*
+	 * Returning void * instead of struct mlx5dv_devx_uar *
+	 * is for compatibility with older rdma-core library headers.
+	 */
+exit:
+	return uar;
+}
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 9d226e5..10a0851 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -261,6 +261,8 @@ int64_t mlx5_get_dbr(void *ctx,  struct mlx5_dbr_page_list *head,
 __rte_internal
 int32_t mlx5_release_dbr(struct mlx5_dbr_page_list *head, uint32_t umem_id,
 			 uint64_t offset);
+__rte_internal
+void *mlx5_devx_alloc_uar(void *ctx, int mapping);
 extern uint8_t haswell_broadwell_cpu;
 
 __rte_internal
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 8d66f1d..726e9f5 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -8,6 +8,14 @@
 #include "mlx5_glue.h"
 #include "mlx5_prm.h"
 
+/*
+ * Defines the number of retries to allocate the first UAR in the page.
+ * OFED 5.0.x and upstream rdma-core before v29 returned NULL as the
+ * UAR base address if the UAR was not the first object in the UAR page.
+ * It caused PMD failures, so we should try to get another UAR until
+ * we get one with a non-NULL base address returned.
+ */
+#define MLX5_ALLOC_UAR_RETRY 32
 
 /* This is limitation of libibverbs: in length variable type is u16. */
 #define MLX5_DEVX_MAX_KLM_ENTRIES ((UINT16_MAX - \
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index af182e6..17dd11f 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -40,6 +40,7 @@ INTERNAL {
 	mlx5_devx_cmd_query_virtq;
 	mlx5_devx_cmd_register_read;
 	mlx5_devx_get_out_command_status;
+	mlx5_devx_alloc_uar;
 
 	mlx5_get_ifname_sysfs;
 	mlx5_get_dbr;
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index f8f8a1f..aa55db3 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -204,13 +204,4 @@
 #define static_assert _Static_assert
 #endif
 
-/*
- * Defines the amount of retries to allocate the first UAR in the page.
- * OFED 5.0.x and Upstream rdma_core before v29 returned the NULL as
- * UAR base address if UAR was not the first object in the UAR page.
- * It caused the PMD failure and we should try to get another UAR
- * till we get the first one with non-NULL base address returned.
- */
-#define MLX5_ALLOC_UAR_RETRY 32
-
 #endif /* RTE_PMD_MLX5_DEFS_H_ */
-- 
1.8.3.1
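
For illustration, a minimal caller-side sketch of the new mlx5_devx_alloc_uar()
helper, mirroring the call sites updated in patches 2/4 and 3/4; the priv
fields and the error label here are placeholders, not code from this series:

	/*
	 * Caller-side sketch only. Passing a negative mapping lets the
	 * helper try MLX5DV_UAR_ALLOC_TYPE_BF first and fall back to
	 * MLX5DV_UAR_ALLOC_TYPE_NC; such callers must then issue a write
	 * memory barrier (e.g. rte_wmb()) after each UAR register write.
	 */
	priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1);
	if (priv->uar == NULL) {
		/* rte_errno is set to ENOMEM by the helper. */
		DRV_LOG(ERR, "Failed to allocate UAR.");
		goto error;
	}
	/*
	 * The retries against NULL base addresses returned by older
	 * rdma-core (see MLX5_ALLOC_UAR_RETRY) happen inside the helper,
	 * so a non-NULL return always carries a valid base address.
	 */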



* [dpdk-dev] [PATCH 2/4] regex/mlx5: fix UAR allocation
  2020-11-10 16:04 [dpdk-dev] [PATCH 1/4] common/mlx5: share UAR allocation routine Viacheslav Ovsiienko
@ 2020-11-10 16:04 ` Viacheslav Ovsiienko
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 3/4] vdpa/mlx5: " Viacheslav Ovsiienko
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2020-11-10 16:04 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, stable

This patch provides the UAR allocation workaround for
hosts where UAR allocation with the Write-Combining memory
mapping type fails.

Fixes: b34d816363b5 ("regex/mlx5: support rules import")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/regex/mlx5/mlx5_regex.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index 17590b9..05048e7 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -176,7 +176,12 @@
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	priv->uar = mlx5_glue->devx_alloc_uar(ctx, 0);
+	/*
+	 * This PMD always issues a write memory barrier after UAR
+	 * register writes, so it is safe to allocate the UAR with any
+	 * memory mapping type.
+	 */
+	priv->uar = mlx5_devx_alloc_uar(ctx, -1);
 	if (!priv->uar) {
 		DRV_LOG(ERR, "can't allocate uar.");
 		rte_errno = ENOMEM;
-- 
1.8.3.1
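
To illustrate the barrier claim in the comment above: with an explicit write
memory barrier after the UAR register write, ordering holds whether the helper
returned a Write-Combining or a Non-Cached mapping. A simplified sketch (the
variable names are illustrative, not taken from this driver):

	/*
	 * Simplified doorbell sketch. The final rte_wmb() flushes the
	 * write-combining buffer (BF mapping) and enforces ordering,
	 * so the same code is also correct for the NC mapping.
	 */
	sq->db_rec[0] = rte_cpu_to_be_32(sq->pi);	/* doorbell record */
	rte_io_wmb();					/* record before UAR */
	*(volatile uint64_t *)uar_addr = *(uint64_t *)wqe; /* ring UAR register */
	rte_wmb();					/* flush/order the UAR write */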



* [dpdk-dev] [PATCH 3/4] vdpa/mlx5: fix UAR allocation
  2020-11-10 16:04 [dpdk-dev] [PATCH 1/4] common/mlx5: share UAR allocation routine Viacheslav Ovsiienko
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 2/4] regex/mlx5: fix UAR allocation Viacheslav Ovsiienko
@ 2020-11-10 16:04 ` Viacheslav Ovsiienko
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 4/4] net/mlx5: fix UAR used by ASO queues Viacheslav Ovsiienko
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2020-11-10 16:04 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, stable

This patch provides the UAR allocation workaround for
hosts where UAR allocation with the Write-Combining memory
mapping type fails.

Fixes: 8395927cdfaf ("vdpa/mlx5: prepare HW queues")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 010543c..3aeaeb8 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -77,7 +77,12 @@
 		DRV_LOG(ERR, "Failed to change event channel FD.");
 		goto error;
 	}
-	priv->uar = mlx5_glue->devx_alloc_uar(priv->ctx, 0);
+	/*
+	 * This PMD always issues a write memory barrier after UAR
+	 * register writes, so it is safe to allocate the UAR with any
+	 * memory mapping type.
+	 */
+	priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1);
 	if (!priv->uar) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to allocate UAR.");
-- 
1.8.3.1



* [dpdk-dev] [PATCH 4/4] net/mlx5: fix UAR used by ASO queues
  2020-11-10 16:04 [dpdk-dev] [PATCH 1/4] common/mlx5: share UAR allocation routine Viacheslav Ovsiienko
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 2/4] regex/mlx5: fix UAR allocation Viacheslav Ovsiienko
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 3/4] vdpa/mlx5: " Viacheslav Ovsiienko
@ 2020-11-10 16:04 ` Viacheslav Ovsiienko
  2020-11-14  8:36 ` [dpdk-dev] [dpdk-stable] [PATCH 1/4] common/mlx5: share UAR allocation routine Thomas Monjalon
  2020-11-14 10:01 ` Thomas Monjalon
  4 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2020-11-10 16:04 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan

A dedicated UAR was allocated for the ASO queues.
The shared UAR created for the Tx queues can be used instead.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5.h          |  1 -
 drivers/net/mlx5/mlx5_flow_age.c | 22 ++++++++++------------
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7ee63a7..2ad927b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -513,7 +513,6 @@ struct mlx5_aso_sq {
 		volatile struct mlx5_aso_wqe *wqes;
 	};
 	volatile uint32_t *db_rec;
-	struct mlx5dv_devx_uar *uar_obj;
 	volatile uint64_t *uar_addr;
 	struct mlx5_aso_devx_mr mr;
 	uint16_t pi;
diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 636bcda..cea2cf7 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -196,8 +196,6 @@
 	}
 	if (sq->cq.cq)
 		mlx5_aso_cq_destroy(&sq->cq);
-	if (sq->uar_obj)
-		mlx5_glue->devx_free_uar(sq->uar_obj);
 	mlx5_aso_devx_dereg_mr(&sq->mr);
 	memset(sq, 0, sizeof(*sq));
 }
@@ -244,6 +242,8 @@
  *   Pointer to SQ to create.
  * @param[in] socket
  *   Socket to use for allocation.
+ * @param[in] uar
+ *   User Access Region object.
  * @param[in] pdn
  *   Protection Domain number to use.
  * @param[in] eqn
@@ -256,7 +256,8 @@
  */
 static int
 mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,
-		   uint32_t pdn, uint32_t eqn,  uint16_t log_desc_n)
+		   struct mlx5dv_devx_uar *uar, uint32_t pdn,
+		   uint32_t eqn,  uint16_t log_desc_n)
 {
 	struct mlx5_devx_create_sq_attr attr = { 0 };
 	struct mlx5_devx_modify_sq_attr modify_attr = { 0 };
@@ -269,11 +270,8 @@
 	if (mlx5_aso_devx_reg_mr(ctx, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *
 				 sq_desc_n, &sq->mr, socket, pdn))
 		return -1;
-	sq->uar_obj = mlx5_glue->devx_alloc_uar(ctx, 0);
-	if (!sq->uar_obj)
-		goto error;
 	if (mlx5_aso_cq_create(ctx, &sq->cq, log_desc_n, socket,
-				mlx5_os_get_devx_uar_page_id(sq->uar_obj), eqn))
+				mlx5_os_get_devx_uar_page_id(uar), eqn))
 		goto error;
 	sq->log_desc_n = log_desc_n;
 	sq->umem_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size +
@@ -297,7 +295,7 @@
 	attr.tis_num = 0;
 	attr.user_index = 0xFFFF;
 	attr.cqn = sq->cq.cq->id;
-	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(sq->uar_obj);
+	wq_attr->uar_page = mlx5_os_get_devx_uar_page_id(uar);
 	wq_attr->pd = pdn;
 	wq_attr->wq_type = MLX5_WQ_TYPE_CYCLIC;
 	wq_attr->log_wq_pg_sz = rte_log2_u32(pgsize);
@@ -327,8 +325,7 @@
 	sq->tail = 0;
 	sq->sqn = sq->sq->id;
 	sq->db_rec = RTE_PTR_ADD(sq->umem_buf, (uintptr_t)(wq_attr->dbr_addr));
-	sq->uar_addr = (volatile uint64_t *)((uint8_t *)sq->uar_obj->base_addr +
-									 0x800);
+	sq->uar_addr = (volatile uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	mlx5_aso_init_sq(sq);
 	return 0;
 error:
@@ -348,8 +345,9 @@
 int
 mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh)
 {
-	return mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0, sh->pdn,
-				  sh->eqn, MLX5_ASO_QUEUE_LOG_DESC);
+	return mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,
+				  sh->tx_uar, sh->pdn, sh->eqn,
+				  MLX5_ASO_QUEUE_LOG_DESC);
 }
 
 /**
-- 
1.8.3.1



* Re: [dpdk-dev] [dpdk-stable] [PATCH 1/4] common/mlx5: share UAR allocation routine
  2020-11-10 16:04 [dpdk-dev] [PATCH 1/4] common/mlx5: share UAR allocation routine Viacheslav Ovsiienko
                   ` (2 preceding siblings ...)
  2020-11-10 16:04 ` [dpdk-dev] [PATCH 4/4] net/mlx5: fix UAR used by ASO queues Viacheslav Ovsiienko
@ 2020-11-14  8:36 ` Thomas Monjalon
  2020-11-14 10:01 ` Thomas Monjalon
  4 siblings, 0 replies; 6+ messages in thread
From: Thomas Monjalon @ 2020-11-14  8:36 UTC (permalink / raw)
  To: Viacheslav Ovsiienko; +Cc: dev, stable, rasland, matan, asafp

10/11/2020 17:04, Viacheslav Ovsiienko:
> This patch introduces a routine to allocate the UAR (User
> Access Region) with various memory mapping types. The original
> patch being fixed provided the UAR allocation workaround
> for the mlx5 net PMD only. It was found that the other mlx5
> based drivers - vdpa and regex - are affected by the issue
> as well and must be fixed.
> 
> Fixes: a0bfe9d56f74 ("net/mlx5: fix UAR memory mapping type")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>

Series applied, thanks





* Re: [dpdk-dev] [dpdk-stable] [PATCH 1/4] common/mlx5: share UAR allocation routine
  2020-11-10 16:04 [dpdk-dev] [PATCH 1/4] common/mlx5: share UAR allocation routine Viacheslav Ovsiienko
                   ` (3 preceding siblings ...)
  2020-11-14  8:36 ` [dpdk-dev] [dpdk-stable] [PATCH 1/4] common/mlx5: share UAR allocation routine Thomas Monjalon
@ 2020-11-14 10:01 ` Thomas Monjalon
  4 siblings, 0 replies; 6+ messages in thread
From: Thomas Monjalon @ 2020-11-14 10:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko; +Cc: dev, stable, rasland, matan, asafp, david.marchand

10/11/2020 17:04, Viacheslav Ovsiienko:
> +void *
> +mlx5_devx_alloc_uar(void *ctx, int mapping)
> +{
> +	void *uar;
> +	uint32_t retry, uar_mapping;
> +	void *base_addr;
> +
> +	for (retry = 0; retry < MLX5_ALLOC_UAR_RETRY; ++retry) {
> +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC
> +		/* Control the mapping type according to the settings. */
> +		uar_mapping = (mapping < 0) ?
> +			      MLX5DV_UAR_ALLOC_TYPE_NC : mapping;
> +#else
> +		/*
> +		 * It seems we have no way to control the memory mapping type
> +		 * for the UAR, the default "Write-Combining" type is assumed.
> +		 */
> +		uar_mapping = 0;
> +#endif

A failure was reported by the CI:
	error: unused parameter ‘mapping’
It was fixed while merging by adding the following in the #else case:
	RTE_SET_USED(mapping);

Please take care of CI results!
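
For clarity, the merged fix-up would look roughly as follows (a sketch of the
#else branch only; RTE_SET_USED() casts its argument to void to silence the
unused-parameter warning):

#else
		/*
		 * It seems we have no way to control the memory mapping type
		 * for the UAR, the default "Write-Combining" type is assumed.
		 */
		uar_mapping = 0;
		RTE_SET_USED(mapping); /* 'mapping' is unused in this branch */
#endif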



