* [dpdk-dev] KNI as kernel vHost backend failing
From: sai kiran @ 2015-01-01 9:02 UTC (permalink / raw)
To: dev
Hi,
We are experimenting with DPDK's KNI application, with KNI acting as the
kernel vhost backend.
1. After starting the KNI application, both ports report link up.
*[root@localhost kni]# ./build/app/kni -c 0xf0 -n 4 -- -p 0x3 -P
--config="(0,4,6),(1,5,7)"*
APP: Initialising port 0 ...
KNI: pci: 10:00:01 8086:10fb
APP: Initialising port 1 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH
value to 4, 8, or 16.
KNI: pci: 16:00:01 8086:10e7
Checking link status
.................................done
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 1000 Mbps - full-duplex
APP: Lcore 5 is reading from port 1
APP: Lcore 7 is writing to port 1
APP: Lcore 6 is writing to port 0
APP: Lcore 4 is reading from port 0
2. As described in the Programming Guide, the *sock_en* variable in sysfs is
enabled and an fd is generated:
[root@localhost dpdk-1.7.1]# cat /sys/class/net/vEth0/sock_en
1
[root@localhost dpdk-1.7.1]# cat /sys/class/net/vEth1/sock_en
1
[root@localhost dpdk-1.7.1]# cat /sys/class/net/vEth0/sock_fd
11
[root@localhost dpdk-1.7.1]# cat /sys/class/net/vEth1/sock_fd
12
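For completeness, this is roughly the enable step we ran before reading
*sock_fd* (a sketch of the documented procedure; the exact sysfs paths may
vary across DPDK versions, and the interface names are the ones our KNI app
created):

```shell
# Enable the vhost socket backend on the KNI interface; the kernel
# module then exposes the socket fd to hand to qemu-kvm.
echo 1 > /sys/class/net/vEth0/sock_en
cat /sys/class/net/vEth0/sock_fd    # fd to pass as -netdev tap,fd=...
```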
3. But when a VM is launched with this file descriptor as the vhost backend,
qemu-kvm reports an ioctl failure, which makes the vhost backend fall back
to userspace virtio.
[root@localhost qemu-kvm-1.2.0]# /usr/bin/qemu-kvm -m 2048 -enable-kvm -cpu
host -smp 2 -name VSK1 -drive file=/root/SAI/NSVPX-KVM-11.0-28.1_nc.raw
-netdev tap,fd=12,id=mynet_kni,vhost=on -device
virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x4,ioeventfd=on
qemu-kvm: -netdev tap,fd=12,id=mynet_kni,vhost=on: TUNGETIFF ioctl()
failed: Bad file descriptor
TUNSETOFFLOAD ioctl() failed: Bad file descriptor
qemu-kvm: unable to start vhost net: 88: falling back on userspace virtio
qemu-kvm: unable to start vhost net: 88: falling back on userspace virtio
qemu-kvm: unable to start vhost net: 88: falling back on userspace virtio
With this failure, traffic from the VM does not flow through the KNI
interface.
The above-mentioned ioctl failure does NOT happen consistently. In the
instances when the failure is not seen, traffic flows successfully through
the KNI interfaces.
Can someone please shed some light on what is happening in this case?
Are we missing something here? Is there a known issue?
Thanks,
Kiran
--
*Saikiran V*
* Re: [dpdk-dev] KNI as kernel vHost backend failing
From: Xie, Huawei @ 2015-02-25 10:35 UTC (permalink / raw)
To: sai kiran; +Cc: dev
On 1/1/2015 5:02 PM, sai kiran wrote:
> We are trying to experiment with DPDK's KNI application, with KNI working
> as Kernel vHost backend.
>
> [...]
>
> The above mentioned ioctl failure does NOT happen consistently. During
> the instances when failure is not seen, traffic flows successfully
> through the KNI interfaces.
>
> Can someone please shed some light as to what is happening in this case.
> Are we missing something here? Is there a known issue?
Hi Kiran,
Is it possible for you to switch to the user-space vhost backend instead?