DPDK patches and discussions
* [dpdk-dev] problem in binding interfaces of virtio-pci on the VM
@ 2015-02-26 14:11 Srinivasreddy R
  2015-02-26 15:12 ` [dpdk-dev] [Dpdk-ovs] " Polehn, Mike A
  0 siblings, 1 reply; 10+ messages in thread
From: Srinivasreddy R @ 2015-02-26 14:11 UTC (permalink / raw)
  To: dev, dpdk-ovs

Hi,
I have written a sample program for usvhost as supported by OVDK.

I initialized the VM using the QEMU command below.
On the VM:

I can see the two interfaces, and traffic works fine in raw-socket
mode. My problem is that when I bind the interfaces to the PMD driver
[igb_uio], my virtual machine hangs and I am no longer able to access it.
My question is: what may be the reason for this behavior, and how can
I debug the root cause?
Please help in finding the problem.



 ./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000
unused=igb_uio *Active*
0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio

Other network devices
=====================
<none>


./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0



./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c  -hda
/home/utils/images/vm1.img  -m 2048M -smp 3 --enable-kvm -name 'VM1'
-nographic -vnc :1 -pidfile /tmp/vm1.pid -drive
file=fat:rw:/tmp/qemu_share,snapshot=off -monitor
unix:/tmp/vm1monitor,server,nowait  -net none -no-reboot -mem-path
/dev/hugepages -mem-prealloc -netdev
type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device
virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
-device
virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off




----------
thanks
srinivas.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-26 14:11 [dpdk-dev] problem in binding interfaces of virtio-pci on the VM Srinivasreddy R
@ 2015-02-26 15:12 ` Polehn, Mike A
  2015-02-26 16:38   ` Srinivasreddy R
  0 siblings, 1 reply; 10+ messages in thread
From: Polehn, Mike A @ 2015-02-26 15:12 UTC (permalink / raw)
  To: Srinivasreddy R, dev, dpdk-ovs

In this example, the control network device 00:03.0 is left unbound from the UIO driver
and stays attached to its Linux device driver (for ssh access with putty); just the target
interfaces are bound. Below, the two target interfaces are shown bound to the uio driver; they are not usable until a task uses the UIO driver.

[root@F21vm l3fwd-vf]# lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation 440FX - 82441FX PMC [Natoma] [8086:1237] (rev 02)
00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
00:01.1 IDE interface [0101]: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] [8086:7010]
00:01.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 03)
00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8]
00:03.0 Ethernet controller [0200]: Red Hat, Inc Virtio network device [1af4:1000]
00:04.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
00:05.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)

[root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
[root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0
[root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:00:04.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
0000:00:05.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf

Network devices using kernel driver
===================================
0000:00:03.0 'Virtio network device' if= drv=virtio-pci unused=virtio_pci,igb_uio

Other network devices
=====================
<none>
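
For completeness, this assumes the UIO kernel modules were loaded before
binding; a typical sequence (the igb_uio.ko path depends on your build
target, so treat it only as an example) is:

modprobe uio                                                    # generic UIO framework
insmod /usr/src/dpdk/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko # DPDK's UIO driver from the build dir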

-----Original Message-----
From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of Srinivasreddy R
Sent: Thursday, February 26, 2015 6:11 AM
To: dev@dpdk.org; dpdk-ovs@lists.01.org
Subject: [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM

[...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-26 15:12 ` [dpdk-dev] [Dpdk-ovs] " Polehn, Mike A
@ 2015-02-26 16:38   ` Srinivasreddy R
  2015-02-26 17:00     ` Bruce Richardson
  0 siblings, 1 reply; 10+ messages in thread
From: Srinivasreddy R @ 2015-02-26 16:38 UTC (permalink / raw)
  To: Polehn, Mike A; +Cc: dev, dpdk-ovs

hi Mike,
Thanks for your detailed explanation of your example. I usually do something
similar, and I am familiar with working with DPDK applications.
My problem is:
1. I have written code for host-to-guest communication (taken from the
usvhost support developed in the OVDK vswitch).
2. I launched a VM with two interfaces.
3. I am able to send and receive traffic between guest and host on these
interfaces.
4. When I try to bind these interfaces to igb_uio to run a DPDK application,
I am no longer able to access my instance. It gets stuck and stops
responding, and I need to hard-reboot the VM.

My question is:
I have surely done something wrong in the code, as my VM is no longer
accessible when I try to bind the interfaces to igb_uio, and I am not able
to debug the issue. Could someone please help me figure out the issue? I
don't find anything in /var/log/messages after relaunching the instance.


thanks,
srinivas.



On Thu, Feb 26, 2015 at 8:42 PM, Polehn, Mike A <mike.a.polehn@intel.com>
wrote:

> [...]



-- 
thanks
srinivas.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-26 16:38   ` Srinivasreddy R
@ 2015-02-26 17:00     ` Bruce Richardson
  2015-02-26 17:16       ` Srinivasreddy R
  0 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2015-02-26 17:00 UTC (permalink / raw)
  To: Srinivasreddy R; +Cc: dev, dpdk-ovs, Polehn, Mike A

On Thu, Feb 26, 2015 at 10:08:59PM +0530, Srinivasreddy R wrote:
> hi Mike,
> Thanks for your detailed explanation of your example. I usually do something
> similar, and I am familiar with working with DPDK applications.
> My problem is:
> 1. I have written code for host-to-guest communication (taken from the
> usvhost support developed in the OVDK vswitch).
> 2. I launched a VM with two interfaces.
> 3. I am able to send and receive traffic between guest and host on these
> interfaces.
> 4. When I try to bind these interfaces to igb_uio to run a DPDK application,
> I am no longer able to access my instance. It gets stuck and stops
> responding, and I need to hard-reboot the VM.

Are you sure you are not trying to access the vm via one of the interfaces
now bound to igb_uio? If you bind the interface you use for ssh to igb_uio,
you won't be able to ssh to that vm any more.
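
One quick sanity check, using the names from your earlier --status output
(ens3 is taken from that listing), is to confirm which PCI device backs the
interface carrying your management traffic before binding:

ip route                               # see which interface holds the default route
readlink /sys/class/net/ens3/device    # its PCI address, e.g. ../0000:00:03.0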

/Bruce

> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-26 17:00     ` Bruce Richardson
@ 2015-02-26 17:16       ` Srinivasreddy R
  2015-02-27 10:06         ` Bruce Richardson
  0 siblings, 1 reply; 10+ messages in thread
From: Srinivasreddy R @ 2015-02-26 17:16 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, dpdk-ovs, Polehn, Mike A

hi Bruce,
Thank you for your response.
I am accessing my VM via "vncviewer", so ssh doesn't come into the picture.
Is there any way to find the root cause of my problem? Does DPDK store
any logs while binding interfaces to igb_uio?
I have looked at /var/log/messages but could not find any clue.

The moment I give the command below, my VM gets stuck and stops responding
until I forcefully kill QEMU and relaunch.
./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0



thanks,
srinivas.



On Thu, Feb 26, 2015 at 10:30 PM, Bruce Richardson <
bruce.richardson@intel.com> wrote:

> [...]



-- 
thanks
srinivas.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-26 17:16       ` Srinivasreddy R
@ 2015-02-27 10:06         ` Bruce Richardson
  2015-02-27 10:59           ` Srinivasreddy R
  0 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2015-02-27 10:06 UTC (permalink / raw)
  To: Srinivasreddy R; +Cc: dev, dpdk-ovs, Polehn, Mike A

On Thu, Feb 26, 2015 at 10:46:58PM +0530, Srinivasreddy R wrote:
> hi Bruce,
> Thank you for your response.
> I am accessing my VM via "vncviewer", so ssh doesn't come into the picture.
> Is there any way to find the root cause of my problem? Does DPDK store
> any logs while binding interfaces to igb_uio?
> I have looked at /var/log/messages but could not find any clue.
> 
> The moment I give the command below, my VM gets stuck and stops responding
> until I forcefully kill QEMU and relaunch.
> ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> 

Does VNC not also connect using a network port? What is the output of 
./dpdk_nic_bind.py --status before you run this command?
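
Also, since your launch command already exposes a monitor socket, you can
poke the guest from the host even when the console is stuck; a minimal
check (assuming socat is installed) would be:

socat - UNIX-CONNECT:/tmp/vm1monitor   # attach to the QEMU monitor socket
(qemu) info status                     # reports whether the guest is running or paused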

/Bruce

> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-27 10:06         ` Bruce Richardson
@ 2015-02-27 10:59           ` Srinivasreddy R
  2015-02-27 11:09             ` Bruce Richardson
  2015-02-27 14:17             ` Mussar, Gary
  0 siblings, 2 replies; 10+ messages in thread
From: Srinivasreddy R @ 2015-02-27 10:59 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, dpdk-ovs, Polehn, Mike A

hi,

Please find the output on the VM:

./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000
unused=igb_uio *Active*
0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio

Other network devices
=====================
<none>


I am trying to bind the "Virtio network device" entries at PCI 00:04.0
and 00:05.0.
When I give the command below, I hit the issue:
./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0



When QEMU is not able to allocate memory for the VM on /dev/hugepages, it
gives the error message below ("Cannot allocate memory").
In that case I am able to bind the interfaces to igb_uio.
Does this give any hint about what I am doing wrong?

Do I need to handle anything on the host when I bind to igb_uio on the
guest for usvhost?


 ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c  -hda
/home/utils/images/vm1.img  -m 4096M -smp 3 --enable-kvm -name 'VM1'
-nographic -vnc :1 -pidfile /tmp/vm1.pid -drive
file=fat:rw:/tmp/qemu_share,snapshot=off -monitor
unix:/tmp/vm1monitor,server,nowait  -net none -no-reboot -mem-path
/dev/hugepages -mem-prealloc -netdev
type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device
virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
-device
virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-net nic -net tap,ifname=tap6,script=no
vvfat /tmp/qemu_share chs 1024,16,63
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio
qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio




thanks,
srinivas.



On Fri, Feb 27, 2015 at 3:36 PM, Bruce Richardson <
bruce.richardson@intel.com> wrote:

> [...]



-- 
thanks
srinivas.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-27 10:59           ` Srinivasreddy R
@ 2015-02-27 11:09             ` Bruce Richardson
  2015-02-27 14:17             ` Mussar, Gary
  1 sibling, 0 replies; 10+ messages in thread
From: Bruce Richardson @ 2015-02-27 11:09 UTC (permalink / raw)
  To: Srinivasreddy R; +Cc: dev, dpdk-ovs, Polehn, Mike A

On Fri, Feb 27, 2015 at 04:29:36PM +0530, Srinivasreddy R wrote:
> hi,
> 
> Please find the output on the VM:
> 
> ./tools/dpdk_nic_bind.py --status
> 
> Network devices using DPDK-compatible driver
> ============================================
> <none>
> 
> Network devices using kernel driver
> ===================================
> 0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000
> unused=igb_uio *Active*
> 0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> 0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> 
> Other network devices
> =====================
> <none>
> 
> 
> I am trying to bind the "Virtio network device" entries at PCI 00:04.0
> and 00:05.0.
> When I give the command below, I hit the issue:
> ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> 

Nothing wrong there that I can see. Perhaps someone who knows more about virtio
might be able to suggest something, especially given the additional errors
you report below.
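
On the "Cannot allocate memory" error itself: that normally just means the
host ran out of free hugepages for the 4096M guest. Assuming 2 MB hugepages
on the host, something like the following shows and adjusts the reservation:

grep Huge /proc/meminfo                  # check HugePages_Total / HugePages_Free
echo 2048 > /proc/sys/vm/nr_hugepages    # 2048 x 2 MB = 4 GB for the guest
mount | grep huge                        # confirm hugetlbfs is mounted at /dev/hugepages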

/Bruce

> 
> 
> When QEMU is not able to allocate memory for the VM on /dev/hugepages, it
> gives the error message below ("Cannot allocate memory").
> In that case I am able to bind the interfaces to igb_uio.
> Does this give any hint about what I am doing wrong?
> 
> Do I need to handle anything on the host when I bind to igb_uio on the
> guest for usvhost?
> 
> 
>  ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c  -hda
> /home/utils/images/vm1.img  -m 4096M -smp 3 --enable-kvm -name 'VM1'
> -nographic -vnc :1 -pidfile /tmp/vm1.pid -drive
> file=fat:rw:/tmp/qemu_share,snapshot=off -monitor
> unix:/tmp/vm1monitor,server,nowait  -net none -no-reboot -mem-path
> /dev/hugepages -mem-prealloc -netdev
> type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device
> virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> -netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
> -device
> virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> -net nic -net tap,ifname=tap6,script=no
> vvfat /tmp/qemu_share chs 1024,16,63
> file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
> qemu-system-x86_64: unable to start vhost net: 22: falling back on
> userspace virtio
> qemu-system-x86_64: unable to start vhost net: 22: falling back on
> userspace virtio
> 
> 
> 
> 
> thanks,
> srinivas.
> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-27 10:59           ` Srinivasreddy R
  2015-02-27 11:09             ` Bruce Richardson
@ 2015-02-27 14:17             ` Mussar, Gary
  2015-02-27 18:21               ` Srinivasreddy R
  1 sibling, 1 reply; 10+ messages in thread
From: Mussar, Gary @ 2015-02-27 14:17 UTC (permalink / raw)
  Cc: dev, dpdk-ovs

This may be a long shot, but I have noticed that when launching the VM with dissimilar device types, the devices might not map to the eth devices in the VM that you expect. Are you sure that ens3 is the device you are expecting to use to talk to the host?
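
A quick way to verify the mapping, assuming ethtool is available in the
guest, is:

ethtool -i ens3    # prints the driver (e1000, virtio_net, ...) and bus-info (the PCI address)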

Gary

-----Original Message-----
From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of Srinivasreddy R
Sent: Friday, February 27, 2015 06:00
To: Bruce Richardson
Cc: dev@dpdk.org; dpdk-ovs@lists.01.org
Subject: Re: [Dpdk-ovs] [dpdk-dev] problem in binding interfaces of virtio-pci on the VM

hi ,

please fine the oputput  On the VM .

/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver ============================== ============== <none>

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio *Active*
0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio

Other network devices
=====================
<none>


i am trying to bind  "virtio network devices "   with pci  00:04.0 ,
00:05.0 .
 .
when i give the  below command i face the issue.
./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0



when  qemu does not able to allocate memory for vm  on /dev/hugepages  . it gives the below error message . "Cannot allocate memory "
In this case i am able to bind the interfaces to igb_uio .
does this gives any hint on what wrong i am doing .

do i need to handle any thing on the host when i bind to igb_uio on the guest  for usvhost .


 ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c  -hda /home/utils/images/vm1.img  -m 4096M -smp 3 --enable-kvm -name 'VM1'
-nographic -vnc :1 -pidfile /tmp/vm1.pid -drive file=fat:rw:/tmp/qemu_share,snapshot=off -monitor unix:/tmp/vm1monitor,server,nowait  -net none -no-reboot -mem-path /dev/hugepages -mem-prealloc -netdev type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
-device
virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-net nic -net tap,ifname=tap6,script=no

vvfat /tmp/qemu_share chs 1024,16,63
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio
qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio
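
(In that log, "22" is errno EINVAL, and the vhost failures come right
after the failed hugepage mmap, so they likely share a root cause. A
hedged way to confirm the vhost character device is present and in use --
assuming your usvhost backend registers a vhost-net-compatible device
under the default name, as ovdk's CUSE layer does -- is:

  ls -l /dev/vhost-net     # should exist while the host switch is running
  fuser -v /dev/vhost-net  # which processes hold it open: the switch, and qemu once launched
)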




thanks,
srinivas.




^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
  2015-02-27 14:17             ` Mussar, Gary
@ 2015-02-27 18:21               ` Srinivasreddy R
  0 siblings, 0 replies; 10+ messages in thread
From: Srinivasreddy R @ 2015-02-27 18:21 UTC (permalink / raw)
  To: Mussar, Gary; +Cc: dev, dpdk-ovs

hi,
Thanks for your reply.

> Are you sure that ens3 is the device you are expecting to use to talk to
> the host?

I am sure ens3 is the device I use to talk to the host. Later on I removed
ens3 and accessed my VM with "vncviewer".

When I bind the interfaces on the VM to igb_uio, how does the
communication between guest and host take place? Maybe I am not handling
it properly in the host application. What are the things to be taken care
of in the host application?
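
(Once the bind succeeds, a minimal guest-side test -- a sketch assuming
testpmd was built inside the guest -- is:

  ./testpmd -c 0x3 -n 4 -- -i    # take over the two bound virtio ports
  testpmd> start tx_first        # kick off forwarding so the host switch sees traffic

After the PMD takes over, the guest kernel network stack is out of the
picture: the application polls the virtio rings directly, and the host
application has to service the other end of those rings itself.)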

thanks,
srinivas.



^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2015-02-27 18:21 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-02-26 14:11 [dpdk-dev] problem in binding interfaces of virtio-pci on the VM Srinivasreddy R
2015-02-26 15:12 ` [dpdk-dev] [Dpdk-ovs] " Polehn, Mike A
2015-02-26 16:38   ` Srinivasreddy R
2015-02-26 17:00     ` Bruce Richardson
2015-02-26 17:16       ` Srinivasreddy R
2015-02-27 10:06         ` Bruce Richardson
2015-02-27 10:59           ` Srinivasreddy R
2015-02-27 11:09             ` Bruce Richardson
2015-02-27 14:17             ` Mussar, Gary
2015-02-27 18:21               ` Srinivasreddy R

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).