* [dpdk-stable] [PATCH 1/3] net/virtio-user: fix deadlock in memory events callback
From: Tiwei Bie @ 2018-09-05 4:28 UTC
To: maxime.coquelin, zhihong.wang, dev; +Cc: seanbh, anatoly.burakov, stable
Deadlock can occur when allocating memory if a vhost-kernel
based virtio-user device is in use. To fix the deadlock,
we will take the memory hotplug lock explicitly in virtio-user
when necessary, and always call the _thread_unsafe memory
functions.
Bugzilla ID: 81
Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
Cc: stable@dpdk.org
Reported-by: Seán Harte <seanbh@gmail.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_user/vhost_kernel.c | 6 +++++-
.../net/virtio/virtio_user/virtio_user_dev.c | 19 +++++++++++++++++++
2 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
index b2444096c..897fee0af 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
@@ -115,7 +115,11 @@ prepare_vhost_memory_kernel(void)
wa.region_nr = 0;
wa.vm = vm;
- if (rte_memseg_contig_walk(add_memory_region, &wa) < 0) {
+ /*
+ * The memory lock has already been taken by memory subsystem
+ * or virtio_user_start_device().
+ */
+ if (rte_memseg_contig_walk_thread_unsafe(add_memory_region, &wa) < 0) {
free(vm);
return NULL;
}
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 7df600b02..869e96f87 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -13,6 +13,8 @@
#include <sys/types.h>
#include <sys/stat.h>
+#include <rte_eal_memconfig.h>
+
#include "vhost.h"
#include "virtio_user_dev.h"
#include "../virtio_ethdev.h"
@@ -109,9 +111,24 @@ is_vhost_user_by_type(const char *path)
int
virtio_user_start_device(struct virtio_user_dev *dev)
{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
uint64_t features;
int ret;
+ /*
+ * XXX workaround!
+ *
+ * We need to make sure that the locks will be
+ * taken in the correct order to avoid deadlocks.
+ *
+ * Before releasing this lock, this thread should
+ * not trigger any memory hotplug events.
+ *
+ * This is a temporary workaround, and should be
+ * replaced when we get proper support from the
+ * memory subsystem in the future.
+ */
+ rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
pthread_mutex_lock(&dev->mutex);
if (is_vhost_user_by_type(dev->path) && dev->vhostfd < 0)
@@ -152,10 +169,12 @@ virtio_user_start_device(struct virtio_user_dev *dev)
dev->started = true;
pthread_mutex_unlock(&dev->mutex);
+ rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
return 0;
error:
pthread_mutex_unlock(&dev->mutex);
+ rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
/* TODO: free resource here or caller to check */
return -1;
}
--
2.18.0
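[The ABBA ordering this patch removes can be reproduced with plain
pthreads. The following is an illustrative sketch only, with stand-in
lock names, not DPDK code: thread A mimics the allocator path (hotplug
lock, then device mutex via the mem event callback) and thread B mimics
the old virtio_user_start_device() (device mutex, then hotplug lock via
rte_memseg_contig_walk()).]

/* Illustrative only: the pre-fix ABBA lock ordering. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *alloc_path(void *arg)
{
	pthread_rwlock_wrlock(&hotplug_lock);	/* EAL memalloc */
	pthread_mutex_lock(&dev_mutex);		/* mem event callback */
	pthread_mutex_unlock(&dev_mutex);
	pthread_rwlock_unlock(&hotplug_lock);
	return arg;
}

static void *start_path(void *arg)
{
	pthread_mutex_lock(&dev_mutex);		/* old: device lock first */
	pthread_rwlock_rdlock(&hotplug_lock);	/* old: memseg walk second */
	pthread_rwlock_unlock(&hotplug_lock);
	pthread_mutex_unlock(&dev_mutex);
	return arg;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, alloc_path, NULL);
	pthread_create(&b, NULL, start_path, NULL);
	pthread_join(a, NULL);	/* may hang: classic ABBA deadlock */
	pthread_join(b, NULL);
	printf("no deadlock this run (interleaving is timing-dependent)\n");
	return 0;
}

[The patch removes the hazard by making the start path take the hotplug
lock before the device mutex, so both threads acquire the two locks in
the same order.]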
* [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Tiwei Bie @ 2018-09-05 4:28 UTC
To: maxime.coquelin, zhihong.wang, dev; +Cc: seanbh, anatoly.burakov, stable
It's possible to have many more hugepage-backed memory regions
than vhost-kernel supports due to memory hotplug, which may
cause problems. A better solution is to have virtio-user pass
all the memory ranges reserved by DPDK to vhost-kernel.
Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_user/vhost_kernel.c | 38 +++++++++----------
1 file changed, 18 insertions(+), 20 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
index 897fee0af..9338166d9 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
@@ -70,41 +70,41 @@ static uint64_t vhost_req_user_to_kernel[] = {
[VHOST_USER_SET_MEM_TABLE] = VHOST_SET_MEM_TABLE,
};
-struct walk_arg {
- struct vhost_memory_kernel *vm;
- uint32_t region_nr;
-};
static int
-add_memory_region(const struct rte_memseg_list *msl __rte_unused,
- const struct rte_memseg *ms, size_t len, void *arg)
+add_memseg_list(const struct rte_memseg_list *msl, void *arg)
{
- struct walk_arg *wa = arg;
+ struct vhost_memory_kernel *vm = arg;
struct vhost_memory_region *mr;
void *start_addr;
+ uint64_t len;
- if (wa->region_nr >= max_regions)
+ if (vm->nregions >= max_regions)
return -1;
- mr = &wa->vm->regions[wa->region_nr++];
- start_addr = ms->addr;
+ start_addr = msl->base_va;
+ len = msl->page_sz * msl->memseg_arr.len;
+
+ mr = &vm->regions[vm->nregions++];
mr->guest_phys_addr = (uint64_t)(uintptr_t)start_addr;
mr->userspace_addr = (uint64_t)(uintptr_t)start_addr;
mr->memory_size = len;
- mr->mmap_offset = 0;
+ mr->mmap_offset = 0; /* flags_padding */
+
+ PMD_DRV_LOG(DEBUG, "index=%u addr=%p len=%" PRIu64,
+ vm->nregions - 1, start_addr, len);
return 0;
}
-/* By default, vhost kernel module allows 64 regions, but DPDK allows
- * 256 segments. As a relief, below function merges those virtually
- * adjacent memsegs into one region.
+/* By default, vhost kernel module allows 64 regions, but DPDK may
+ * have many more memory regions. The function below will treat each
+ * contiguous memory space reserved by DPDK as one region.
*/
static struct vhost_memory_kernel *
prepare_vhost_memory_kernel(void)
{
struct vhost_memory_kernel *vm;
- struct walk_arg wa;
vm = malloc(sizeof(struct vhost_memory_kernel) +
max_regions *
@@ -112,20 +112,18 @@ prepare_vhost_memory_kernel(void)
if (!vm)
return NULL;
- wa.region_nr = 0;
- wa.vm = vm;
+ vm->nregions = 0;
+ vm->padding = 0;
/*
* The memory lock has already been taken by memory subsystem
* or virtio_user_start_device().
*/
- if (rte_memseg_contig_walk_thread_unsafe(add_memory_region, &wa) < 0) {
+ if (rte_memseg_list_walk_thread_unsafe(add_memseg_list, vm) < 0) {
free(vm);
return NULL;
}
- vm->nregions = wa.region_nr;
- vm->padding = 0;
return vm;
}
--
2.18.0
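[For context, the table built by prepare_vhost_memory_kernel() is
ultimately handed to the kernel in a single VHOST_SET_MEM_TABLE ioctl;
in DPDK this goes through the driver's vhost_kernel_ioctl() path. A
minimal sketch of that step, assuming vhostfd is an open /dev/vhost-net
descriptor and using the kernel's UAPI struct vhost_memory, which
struct vhost_memory_kernel mirrors:]

#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Install the region table built above. One call replaces the whole
 * table, which is why it must fit within the vhost module's region
 * limit (64 by default). */
static int
set_mem_table(int vhostfd, struct vhost_memory *vm)
{
	return ioctl(vhostfd, VHOST_SET_MEM_TABLE, vm);
}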
* Re: [dpdk-stable] [PATCH 1/3] net/virtio-user: fix deadlock in memory events callback
From: Sean Harte @ 2018-09-05 8:10 UTC
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev, anatoly.burakov, stable
On 5 September 2018 at 05:28, Tiwei Bie <tiwei.bie@intel.com> wrote:
> Deadlock can occur when allocating memory if a vhost-kernel
> based virtio-user device is in use. To fix the deadlock,
> we will take the memory hotplug lock explicitly in virtio-user
> when necessary, and always call the _thread_unsafe memory
> functions.
>
> Bugzilla ID: 81
> Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
> Cc: stable@dpdk.org
>
> Reported-by: Seán Harte <seanbh@gmail.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> [...]
>
Tested-by: Seán Harte <seanbh@gmail.com>
Reviewed-by: Seán Harte <seanbh@gmail.com>
* Re: [dpdk-stable] [PATCH 1/3] net/virtio-user: fix deadlock in memory events callback
From: Burakov, Anatoly @ 2018-09-07 9:30 UTC
To: Tiwei Bie, maxime.coquelin, zhihong.wang, dev; +Cc: seanbh, stable
On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
> Deadlock can occur when allocating memory if a vhost-kernel
> based virtio-user device is in use. To fix the deadlock,
> we will take the memory hotplug lock explicitly in virtio-user
> when necessary, and always call the _thread_unsafe memory
> functions.
>
> Bugzilla ID: 81
> Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
> Cc: stable@dpdk.org
>
> Reported-by: Seán Harte <seanbh@gmail.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> [...]
>
For the memory parts,
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
--
Thanks,
Anatoly
* Re: [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Burakov, Anatoly @ 2018-09-07 9:44 UTC
To: Tiwei Bie, maxime.coquelin, zhihong.wang, dev; +Cc: seanbh, stable
On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
> It's possible to have many more hugepage-backed memory regions
> than vhost-kernel supports due to memory hotplug, which may
> cause problems. A better solution is to have virtio-user pass
> all the memory ranges reserved by DPDK to vhost-kernel.
>
> Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
> Cc: stable@dpdk.org
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> [...]
>
Doesn't that assume single file segments mode?
--
Thanks,
Anatoly
* Re: [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Tiwei Bie @ 2018-09-07 11:37 UTC
To: Burakov, Anatoly; +Cc: maxime.coquelin, zhihong.wang, dev, seanbh, stable
On Fri, Sep 07, 2018 at 10:44:22AM +0100, Burakov, Anatoly wrote:
> On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
> > [...]
>
> Doesn't that assume single file segments mode?
This is to find out the VA ranges reserved by the memory subsystem.
Why does it need to assume single file segments mode?
* Re: [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Burakov, Anatoly @ 2018-09-07 12:24 UTC
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev, seanbh, stable
On 07-Sep-18 12:37 PM, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 10:44:22AM +0100, Burakov, Anatoly wrote:
>> On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
>>> [...]
>>
>> Doesn't that assume single file segments mode?
>
> This is to find out the VA ranges reserved by the memory subsystem.
> Why does it need to assume single file segments mode?
If you are not in single-file segments mode, each individual page in a
VA-contiguous area will be behind a different fd - so it will be part of
a different region, would it not?
--
Thanks,
Anatoly
* Re: [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Tiwei Bie @ 2018-09-10 4:04 UTC
To: Burakov, Anatoly; +Cc: maxime.coquelin, zhihong.wang, dev, seanbh, stable
On Fri, Sep 07, 2018 at 01:24:05PM +0100, Burakov, Anatoly wrote:
> On 07-Sep-18 12:37 PM, Tiwei Bie wrote:
> > On Fri, Sep 07, 2018 at 10:44:22AM +0100, Burakov, Anatoly wrote:
> > > On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
> > > > [...]
> > >
> > > Doesn't that assume single file segments mode?
> >
> > This is to find out the VA ranges reserved by the memory subsystem.
> > Why does it need to assume single file segments mode?
>
> If you are not in single-file segments mode, each individual page in a
> VA-contiguous area will be behind a different fd - so it will be part of a
> different region, would it not?
The above code is for vhost-kernel. The kernel doesn't need
the fds to access the virtio-user process's memory. It just
needs to know the mappings between GPA (guest physical
address) and VA (virtio-user's virtual address).
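[In other words, with guest_phys_addr and userspace_addr both set to
the same VA, the kernel-side lookup degenerates to an identity mapping.
A sketch of the conceptual translation, not actual kernel code; field
names follow struct vhost_memory_kernel from the patch:]

static uint64_t
gpa_to_va(struct vhost_memory_kernel *vm, uint64_t gpa)
{
	uint32_t i;

	for (i = 0; i < vm->nregions; i++) {
		struct vhost_memory_region *mr = &vm->regions[i];

		if (gpa >= mr->guest_phys_addr &&
		    gpa - mr->guest_phys_addr < mr->memory_size)
			/* guest_phys_addr == userspace_addr here,
			 * so this simply returns gpa. */
			return gpa - mr->guest_phys_addr +
				mr->userspace_addr;
	}
	return 0; /* no region covers this address */
}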
* Re: [dpdk-stable] [PATCH 1/3] net/virtio-user: fix deadlock in memory events callback
From: Maxime Coquelin @ 2018-09-11 12:52 UTC
To: Tiwei Bie, zhihong.wang, dev; +Cc: seanbh, anatoly.burakov, stable
On 09/05/2018 06:28 AM, Tiwei Bie wrote:
> Deadlock can occur when allocating memory if a vhost-kernel
> based virtio-user device is in use. To fix the deadlock,
> we will take the memory hotplug lock explicitly in virtio-user
> when necessary, and always call the _thread_unsafe memory
> functions.
>
> Bugzilla ID: 81
> Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
> Cc: stable@dpdk.org
>
> Reported-by: Seán Harte <seanbh@gmail.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_user/vhost_kernel.c | 6 +++++-
> .../net/virtio/virtio_user/virtio_user_dev.c | 19 +++++++++++++++++++
> 2 files changed, 24 insertions(+), 1 deletion(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Maxime Coquelin @ 2018-09-11 13:10 UTC
To: Tiwei Bie, zhihong.wang, dev; +Cc: seanbh, anatoly.burakov, stable
On 09/05/2018 06:28 AM, Tiwei Bie wrote:
> It's possible to have many more hugepage-backed memory regions
> than vhost-kernel supports due to memory hotplug, which may
> cause problems. A better solution is to have virtio-user pass
> all the memory ranges reserved by DPDK to vhost-kernel.
>
> Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
> Cc: stable@dpdk.org
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_user/vhost_kernel.c | 38 +++++++++----------
> 1 file changed, 18 insertions(+), 20 deletions(-)
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
* Re: [dpdk-stable] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel
From: Burakov, Anatoly @ 2018-09-17 10:18 UTC
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev, seanbh, stable
On 10-Sep-18 5:04 AM, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 01:24:05PM +0100, Burakov, Anatoly wrote:
>> On 07-Sep-18 12:37 PM, Tiwei Bie wrote:
>>> On Fri, Sep 07, 2018 at 10:44:22AM +0100, Burakov, Anatoly wrote:
>>>> On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
>>>>> [...]
>>>>
>>>> Doesn't that assume single file segments mode?
>>>
>>> This is to find out the VA ranges reserved by the memory subsystem.
>>> Why does it need to assume single file segments mode?
>>
>> If you are not in single-file segments mode, each individual page in a
>> VA-contiguous area will be behind a different fd - so it will be part of a
>> different region, would it not?
>
> The above code is for vhost-kernel. The kernel doesn't need
> the fds to access the virtio-user process's memory. It just
> needs to know the mappings between GPA (guest physical
> address) and VA (virtio-user's virtual address).
>
Ah, OK. Thanks for the clarification!
--
Thanks,
Anatoly