From: Maciej Grochowski <maciej.grochowski@codilime.com>
To: "Gaohaifeng (A)" <gaohaifeng.gao@huawei.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Vhost user no connection vm2vm
Date: Fri, 22 May 2015 11:28:28 +0200
Message-ID: <CALPCkO_LdfC+VrWeyph5y46w+ycT0tbgSGUgj7GBM=qR5wX_cA@mail.gmail.com>
In-Reply-To: <47CEA9C0E570484FBF22EF0D7EBCE5B534AC4D90@szxema505-mbs.china.huawei.com>

"Do you use some command I suggest before,
In case of you miss the previous mail, just copy it again:"

Yes, but it didn't help. ;/

I will describe my setup step by step, to make sure the configuration is
done the right way.


I started vhost:

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem 3712 \
  -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
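
As a quick sanity check on my side (paths are the ones from my setup),
before starting the VMs I verify that hugepages are available and that
vhost-switch created the socket the VMs will connect to:

grep Huge /proc/meminfo                                  # hugepages allocated and free
ls -l /home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost    # socket created via --dev-basename usvhost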

Now I run two VMs, with the following configuration:

VM1   __  __  VM2
eth0 >  \/  > eth0
eth1 >__/\__> eth1

So I will connect VM1.eth0 with VM2.eth1, and VM1.eth1 with VM2.eth0.
Because this is a test environment and I have no other network connection on
the vhost host, I will create two networks, 192.168.0.x and 192.168.1.x:
VM1.eth0 and VM2.eth1 will be placed in 192.168.0.x, and VM1.eth1 and
VM2.eth0 in 192.168.1.x.

## I started the first VM (VM1) as follows
kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm2 -cpu host -smp 1 \
  -hda /home/ubuntu/esi_ee/qemu/debian_min_1.qcow2 -m 256 -mem-path /mnt/huge -mem-prealloc \
  -chardev socket,id=char3,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
  -netdev type=vhost-user,id=hostnet3,chardev=char3 \
  -device virtio-net-pci,netdev=hostnet3,id=net3,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -chardev socket,id=char4,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
  -netdev type=vhost-user,id=hostnet4,chardev=char4 \
  -device virtio-net-pci,netdev=hostnet4,id=net4,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
## qemu gave the following output
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet3,chardev=char3: chardev "char3" went up
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet4,chardev=char4: chardev "char4" went up

## I started the second VM (VM2) as follows
kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host -smp 1 \
  -hda /home/ubuntu/esi_ee/qemu/debian_min_2.qcow2 -m 256 -mem-path /mnt/huge -mem-prealloc \
  -chardev socket,id=char1,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
  -netdev type=vhost-user,id=hostnet1,chardev=char1 \
  -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -chardev socket,id=char2,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
  -netdev type=vhost-user,id=hostnet2,chardev=char2 \
  -device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
## qemu output for the second VM
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1: chardev "char1" went up
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet2,chardev=char2: chardev "char2" went up



After that I had a MAC conflict between VM1 and VM2:

VM1: ifconfig -a
eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:56
          inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 52:54:00:12:34:57
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


VM2: ifconfig -a
eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:56
          inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 52:54:00:12:34:57
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

In the KNI example I had something similar, also with no packet flow, and
the solution was to change the MAC addresses:

#VM1
/etc/init.d/networking stop
ifconfig eth0 hw ether 00:01:04:00:01:00
ifconfig eth1 hw ether 00:01:04:00:01:01
/etc/init.d/networking start
ifconfig eth0
ifconfig eth1

#VM2
/etc/init.d/networking stop
ifconfig eth0 hw ether 00:01:04:00:02:00
ifconfig eth1 hw ether 00:01:04:00:02:01
/etc/init.d/networking start
ifconfig eth0
ifconfig eth1
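
An alternative that avoids the conflict at boot (assuming the QEMU build
supports the mac= property of virtio-net-pci) would be to assign unique
MACs on the kvm command line instead, e.g. for VM1's first NIC:

-device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:01:04:00:01:00,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

and likewise for the other three devices.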

Then I applied the configuration you showed:

#VM1
ip addr add 192.168.0.100/24 dev eth0
ip addr add 192.168.1.100/24 dev eth1
ip neigh add 192.168.0.200 lladdr 00:01:04:00:02:01 dev eth0
ip link set dev eth0 up
ip neigh add 192.168.1.200 lladdr 00:01:04:00:02:00 dev eth1
ip link set dev eth1 up

eth0      Link encap:Ethernet  HWaddr 00:01:04:00:01:00
          inet addr:192.168.0.100  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::201:4ff:fe00:100/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 00:01:04:00:01:01
          inet addr:192.168.1.100  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::201:4ff:fe00:101/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


#VM2
ip addr add 192.168.1.200/24 dev eth0
ip addr add 192.168.0.200/24 dev eth1
ip neigh add 192.168.1.100 lladdr 00:01:04:00:01:01 dev eth0
ip link set dev eth0 up
ip neigh add 192.168.0.100 lladdr 00:01:04:00:01:00 dev eth1
ip link set dev eth1 up

eth0      Link encap:Ethernet  HWaddr 00:01:04:00:02:00
          inet addr:192.168.1.200  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::201:4ff:fe00:200/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 00:01:04:00:02:01
          inet addr:192.168.0.200  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::201:4ff:fe00:201/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

After that:

VM1.eth0 (ip=192.168.0.100, MAC=00:01:04:00:01:00) is connected to VM2.eth1
(ip=192.168.0.200, MAC=00:01:04:00:02:01)
VM1.eth1 (ip=192.168.1.100, MAC=00:01:04:00:01:01) is connected to VM2.eth0
(ip=192.168.1.200, MAC=00:01:04:00:02:00)

My ARP tables show this:

#VM1
arp -a
? (192.168.0.200) at 00:01:04:00:02:01 [ether] PERM on eth0
? (192.168.1.200) at 00:01:04:00:02:00 [ether] PERM on eth1


#VM2
arp -a
? (192.168.0.100) at 00:01:04:00:01:00 [ether] PERM on eth1
? (192.168.1.100) at 00:01:04:00:01:01 [ether] PERM on eth0
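
To see whether the echo requests leave VM1 at all, I can watch the peer
interface on VM2 while pinging; with the PERM neighbour entries there should
be no ARP on the wire, only ICMP. A minimal check, assuming tcpdump is
installed in the guests:

# on VM2
tcpdump -e -i eth1 icmp
# on VM1, in parallel
ping -I eth0 192.168.0.200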


# After this configuration I tried to ping VM2 from VM1 (both IPs)

root@debian-amd64:~# ping -I eth0 192.168.0.200
PING 192.168.0.200 (192.168.0.200) from 192.168.0.100 eth0: 56(84) bytes of data.
^C
--- 192.168.0.200 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4032ms

root@debian-amd64:~# ping 192.168.0.200
PING 192.168.0.200 (192.168.0.200) 56(84) bytes of data.
^C
--- 192.168.0.200 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

root@debian-amd64:~# ping -I eth1 192.168.1.200
PING 192.168.1.200 (192.168.1.200) from 192.168.1.100 eth1: 56(84) bytes of data.
^C
--- 192.168.1.200 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5040ms

root@debian-amd64:~# ping 192.168.1.200
PING 192.168.1.200 (192.168.1.200) 56(84) bytes of data.
^C
--- 192.168.1.200 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4030ms

# ...and to ping VM1 from VM2
root@debian-amd64:~# ping 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.
^C
--- 192.168.0.100 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2015ms

root@debian-amd64:~# ping -I eth1 192.168.0.100
PING 192.168.0.100 (192.168.0.100) from 192.168.0.200 eth1: 56(84) bytes of data.
^C
--- 192.168.0.100 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4032ms

root@debian-amd64:~# ping -I eth0 192.168.1.100
PING 192.168.1.100 (192.168.1.100) from 192.168.1.200 eth0: 56(84) bytes of data.
^C
--- 192.168.1.100 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3024ms

root@debian-amd64:~# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
^C
--- 192.168.1.100 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7055ms


Also, the stats from vhost:
Device statistics ====================================
Statistics for device 0 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
Statistics for device 1 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
Statistics for device 2 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
Statistics for device 3 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
======================================================

My way of thinking was: "In vhost there are several L2 functions that learn
MACs and link the devices, so why do I see no received packets?"

Maybe I'm making some silly mistake in the network configuration, but to me
it looks like a data-flow issue, especially since no function on the vhost
side saw any packets.
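
One way to narrow this down: if the guest's own TX counters grow while every
vhost counter stays at zero, the packets are being handed to virtio-net
inside the guest but never reach the host's dequeue path. A quick check
inside VM1 while the ping is running (assuming iproute2 and ethtool are
present in the guest):

ip -s link show eth0    # TX packets/bytes should increase with each ping
ethtool -i eth0         # confirm the interface is driven by virtio_net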

On Fri, May 22, 2015 at 10:54 AM, Gaohaifeng (A) <gaohaifeng.gao@huawei.com> wrote:

>  Hi
>
> What kernel version are You using on host/guest?
>
> >> I use Ubuntu 12.04 (3.11.0-15-generic) on the host. In the VMs I use
> both Ubuntu 12.04 and Ubuntu 14.04, but the result is the same.
>
>
>
> Did you use the commands I suggested before? In case you missed the
> previous mail, I'll just copy them again:
>
> >> I tried it, but the result is the same.
>
>
>
>
>
> I used l2fwd in the VM to do more tests and found that virtio_xmit_pkts is
> called and avail_idx is increasing in the VM, but on the host avail_idx (in
> the rte_vhost_dequeue_burst function) is always zero. It seems that the
> host sees a different memory area.
>
>
>
> Init Logs below:
>
> VHOST_CONFIG: (0) Mergeable RX buffers disabled
>
> VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
>
> VHOST_CONFIG: mapped region 0 fd:24 to 0x2aaaaac00000 sz:0xa0000 off:0x0
>
> VHOST_CONFIG: REGION: 0 GPA: (nil) QEMU VA: 0x2aaaaac00000 SIZE (655360)
>
> VHOST_CONFIG: mapped region 1 fd:26 to 0x2aaaaae00000 sz:0x40000000 off:0xc0000
>
> VHOST_CONFIG: REGION: 1 GPA: 0xc0000 QEMU VA: 0x2aaaaacc0000 SIZE (1072955392)
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>
> VHOST_CONFIG: (0) mapped address desc: 0x2aaae62f1000
>
> VHOST_CONFIG: (0) mapped address avail: 0x2aaae62f2000
>
> VHOST_CONFIG: (0) mapped address used: 0x2aaae62f3000
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
>
> VHOST_CONFIG: vring kick idx:0 file:23
>
> VHOST_CONFIG: virtio isn't ready for processing.
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>
> VHOST_CONFIG: (0) mapped address desc: 0x2aaae62f4000
>
> VHOST_CONFIG: (0) mapped address avail: 0x2aaae62f5000
>
> VHOST_CONFIG: (0) mapped address used: 0x2aaae62f6000
>
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
>
> VHOST_CONFIG: vring kick idx:1 file:28
>
> VHOST_CONFIG: virtio is now ready for processing.
>
>
>
>
>
> >Unfortunately not; I have the same issue in the rte_vhost_dequeue_burst
> function.
>
>
>
> >What kernel version are you using on host/guest? In my case the host had
> 3.13.0 and the guests an old Debian 3.2.
>
>
>
> >I just looked deeper into the virtio back-end (vhost), but at first
> glance it seems like nothing is coming from virtio.
>
>
>
> >What I'm going to do today is compile the newest kernel for the vhost
> host and the guest, and debug where the packet flow gets stuck; I will
> report the result.
>
>
>
> >On Thu, May 21, 2015 at 11:12 AM, Gaohaifeng (A) <gaohaifeng.gao@huawei.com> wrote:
>
> >Hi Maciej
>         >Did you solve your problem? I have met the same problem as in
> your case. And I found that avail_idx (in the rte_vhost_dequeue_burst
> function) is always zero, although I do send packets in the VM.
>
> >Thanks.
>
>
>
> > Hello, I have a strange issue with the examples/vhost app.
> >
> > I compiled DPDK to run the vhost example app with the following flags:
> >
> > CONFIG_RTE_LIBRTE_VHOST=y
> > CONFIG_RTE_LIBRTE_VHOST_USER=y
> > CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> >
> > Then I ran the vhost app as per the documentation:
> >
> > ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem 3712 \
> >   -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
> >
> > I use this odd --socket-mem 3712 because of the physical memory limit
> > of the device. With this vhost-user setup I run two KVM machines with
> > the following parameters:
> >
> > kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host -smp 2 \
> >   -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 1024 -mem-path /mnt/huge -mem-prealloc \
> >   -chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
> >   -netdev type=vhost-user,id=hostnet1,chardev=char1 \
> >   -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
> >   -chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
> >   -netdev type=vhost-user,id=hostnet2,chardev=char2 \
> >   -device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> >
> > After starting KVM, virtio comes up correctly (logs from the vhost app below):
> ...
> > VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae00000 sz:0xa0000 off:0x0
> > VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb000000 sz:0x10000000 off:0xc0000
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
> > VHOST_CONFIG: vring kick idx:0 file:38
> > VHOST_CONFIG: virtio isn't ready for processing.
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
> > VHOST_CONFIG: vring kick idx:1 file:39
> > VHOST_CONFIG: virtio is now ready for processing.
> > VHOST_DATA: (1) Device has been added to data core 2
> >
> > So everything looks good.
> >
> > Maybe it is something trivial, but with the options --vm2vm 1 (or 2) and
> > --stats 9 it seems that there is no VM-to-VM connectivity. I set the IPs
> > manually for eth0 and eth1:
> >
> > on VM1:
> > ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up
> > ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up
> >
> > on VM2:
> > ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up
> > ifconfig eth1 192.168.0.202 netmask 255.255.255.0 up
> >
> > I noticed that in the vhost app the rx/tx queues are unidirectional, so
> > I tried to ping from VM1 to VM2 using both interfaces:
> > ping -I eth0 192.168.1.200
> > ping -I eth1 192.168.1.200
> > ping -I eth0 192.168.0.202
> > ping -I eth1 192.168.0.202
> >
> > On VM2, using tcpdump on both interfaces, I didn't see any ICMP requests
> > or any other traffic.
> >
> > And I can't ping between any IPs/interfaces; moreover, the stats show:
> >
> > Device statistics ====================================
> > Statistics for device 0 ------------------------------
> > TX total:               0
> > TX dropped:             0
> > TX successful:          0
> > RX total:               0
> > RX dropped:             0
> > RX successful:          0
> > Statistics for device 1 ------------------------------
> > TX total:               0
> > TX dropped:             0
> > TX successful:          0
> > RX total:               0
> > RX dropped:             0
> > RX successful:          0
> > Statistics for device 2 ------------------------------
> > TX total:               0
> > TX dropped:             0
> > TX successful:          0
> > RX total:               0
> > RX dropped:             0
> > RX successful:          0
> > Statistics for device 3 ------------------------------
> > TX total:               0
> > TX dropped:             0
> > TX successful:          0
> > RX total:               0
> > RX dropped:             0
> > RX successful:          0
> > ======================================================
> >
> > So it seems that no packet left my VM.
> > Also, the ARP table is empty on each VM.
>
>
>

