DPDK patches and discussions
From: Srinivasreddy R <srinivasreddy4390@gmail.com>
To: "Mussar, Gary" <gmussar@ciena.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"dpdk-ovs@lists.01.org" <dpdk-ovs@lists.01.org>
Subject: Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
Date: Fri, 27 Feb 2015 23:51:22 +0530	[thread overview]
Message-ID: <CAJP4VWjOUoxj-zqqRx6ROrA710iYgShP90dKnJwdk+KuJbHyGg@mail.gmail.com> (raw)
In-Reply-To: <C281A17C31CFD745B242416D0E96EC637C75C5E2@ONWVEXCHMB04.ciena.com>

Hi,
Thanks for your reply.

> Are you sure that ens3 is the device you are expecting to use to talk to
> the host?

I am sure ens3 is the device I use to talk to the host. Later on I removed
ens3 and accessed my VM with vncviewer.

When I bind the interfaces on the VM to igb_uio, how does the communication
between the guest and the host take place?
Maybe I am not handling something properly in the host application.
What needs to be taken care of in the host application?
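
From what I understand, once the ports are bound to igb_uio the guest kernel
no longer drives them, so traffic only flows while a DPDK application is
polling them. As a rough sanity check from inside the guest (the testpmd
build path and the core/port masks below are only placeholder assumptions):

  # run testpmd on the two igb_uio-bound virtio ports (guest side)
  ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -n 4 -- -i
  testpmd> start
  testpmd> show port stats all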

thanks,
srinivas.


> Gary
>
> -----Original Message-----
> From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of
> Srinivasreddy R
> Sent: Friday, February 27, 2015 06:00
> To: Bruce Richardson
> Cc: dev@dpdk.org; dpdk-ovs@lists.01.org
> Subject: Re: [Dpdk-ovs] [dpdk-dev] problem in binding interfaces of
> virtio-pci on the VM
>
> hi ,
>
> Please find the output on the VM:
>
> ./tools/dpdk_nic_bind.py --status
>
> Network devices using DPDK-compatible driver
> ============================================
> <none>
>
> Network devices using kernel driver
> ===================================
> 0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000
> unused=igb_uio *Active*
> 0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> 0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
>
> Other network devices
> =====================
> <none>
>
>
> I am trying to bind the 'Virtio network devices' at PCI addresses 00:04.0
> and 00:05.0.
> When I give the command below, I face the issue:
> ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
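
As a side note, the usual prerequisite on the guest before this bind step is
that the UIO modules are loaded. A minimal sketch, where the
x86_64-native-linuxapp-gcc build directory is only an assumption:

  modprobe uio
  insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
  lsmod | grep igb_uio   # confirm the module is loaded before binding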
>
>
>
> When QEMU is not able to allocate memory for the VM on /dev/hugepages,
> it gives the error message below: "Cannot allocate memory".
> In that case I am able to bind the interfaces to igb_uio.
> Does this give any hint about what I am doing wrong?
>
> Do I need to handle anything on the host when I bind to igb_uio on the
> guest for usvhost?
>
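
Since the "Cannot allocate memory" error in the output below comes from the
hugepage-backed guest memory, here is a rough sketch of how the host-side
hugepage reservation could be checked before launching QEMU; the 2 MB page
size and the page count (sized for the 4096M guest) are assumptions:

  grep Huge /proc/meminfo                 # current hugepage reservation/usage
  echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  mount | grep hugetlbfs                  # /dev/hugepages should be mounted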
>
>  ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c  -hda
> /home/utils/images/vm1.img  -m 4096M -smp 3 --enable-kvm -name 'VM1'
> -nographic -vnc :1 -pidfile /tmp/vm1.pid -drive
> file=fat:rw:/tmp/qemu_share,snapshot=off -monitor
> unix:/tmp/vm1monitor,server,nowait  -net none -no-reboot -mem-path
> /dev/hugepages -mem-prealloc -netdev
> type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device
> virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> -netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
> -device
>
> virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> -net nic -net tap,ifname=tap6,script=no
> vvfat /tmp/qemu_share chs 1024,16,63
> file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
> qemu-system-x86_64: unable to start vhost net: 22: falling back on
> userspace virtio
> qemu-system-x86_64: unable to start vhost net: 22: falling back on
> userspace virtio
>
>
>
>
> thanks,
> srinivas.
>
>
>
> On Fri, Feb 27, 2015 at 3:36 PM, Bruce Richardson <
> bruce.richardson@intel.com> wrote:
>
> > On Thu, Feb 26, 2015 at 10:46:58PM +0530, Srinivasreddy R wrote:
> > > hi Bruce ,
> > > Thank you for your response .
> > > I am accessing my VM via vncviewer, so ssh doesn't come into the
> > > picture.
> > > Is there any way to find the root cause of my problem? Does DPDK
> > > store any logs while binding interfaces to igb_uio?
> > > I have looked in /var/log/messages but could not find any clue.
> > >
> > > The moment I gave the command below, my VM got stuck and was not
> > > responding until I forcefully killed QEMU and relaunched it.
> > > ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> > >
> >
> > Does VNC not also connect using a network port? What is the output of
> > ./dpdk_nic_bind.py --status before you run this command?
> >
> > /Bruce
> >
> > >
> > >
> > > thanks,
> > > srinivas.
> > >
> > >
> > >
> > > On Thu, Feb 26, 2015 at 10:30 PM, Bruce Richardson <
> > > bruce.richardson@intel.com> wrote:
> > >
> > > > On Thu, Feb 26, 2015 at 10:08:59PM +0530, Srinivasreddy R wrote:
> > > > > hi Mike,
> > > > > Thanks for your detailed explanation of your example. Usually I do
> > > > > something similar to you, and I am familiar with working with DPDK
> > > > > applications. My problem is:
> > > > > 1. I have written code for host to guest communication [taken from
> > > > > the usvhost support developed in the OVDK vswitch].
> > > > > 2. I launched the VM with two interfaces.
> > > > > 3. I am able to send and receive traffic between the guest and the
> > > > > host on these interfaces.
> > > > > 4. When I try to bind these interfaces to igb_uio to run a DPDK
> > > > > application, I am not able to access my instance. It got stuck and
> > > > > was not responding; I need to hard reboot the VM.
> > > >
> > > > Are you sure you are not trying to access the VM via one of the
> > > > interfaces now bound to igb_uio? If you bind the interface you use
> > > > for ssh to igb_uio, you won't be able to ssh to that VM any more.
> > > >
> > > > /Bruce
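
A quick way to double-check this from inside the guest, assuming ethtool is
available there (ens3 is the active kernel-driver interface from the --status
output above):

  ethtool -i ens3 | grep bus-info   # shows the PCI address backing ens3
  # leave that device (00:03.0 here) on the kernel driver when binding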
> > > >
> > > > >
> > > > > My question is:
> > > > > Surely I might have done something wrong in the code, as my VM is
> > > > > no longer accessible when I try to bind the interfaces to igb_uio,
> > > > > and I am not able to debug the issue.
> > > > > Could someone please help me figure out the issue? I don't find
> > > > > anything in /var/log/messages after relaunching the instance.
> > > > >
> > > > >
> > > > > thanks,
> > > > > srinivas.
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Feb 26, 2015 at 8:42 PM, Polehn, Mike A <
> > > > > mike.a.polehn@intel.com> wrote:
> > > > >
> > > > > > In this example, the control network 00:03.0 remains unbound to the
> > > > > > UIO driver but stays attached to the Linux device driver (ssh access
> > > > > > with putty), and just the target interfaces are bound.
> > > > > > Below, it shows all 3 interfaces bound to the uio driver, which are
> > > > > > not usable until a task uses the UIO driver.
> > > > > >
> > > > > > [root@F21vm l3fwd-vf]# lspci -nn
> > > > > > 00:00.0 Host bridge [0600]: Intel Corporation 440FX - 82441FX PMC [Natoma] [8086:1237] (rev 02)
> > > > > > 00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
> > > > > > 00:01.1 IDE interface [0101]: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] [8086:7010]
> > > > > > 00:01.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 03)
> > > > > > 00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8]
> > > > > > 00:03.0 Ethernet controller [0200]: Red Hat, Inc Virtio network device [1af4:1000]
> > > > > > 00:04.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
> > > > > > 00:05.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
> > > > > >
> > > > > > [root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py
> > > > > > --bind=igb_uio 00:04.0
> > > > > > [root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py
> > > > > > --bind=igb_uio 00:05.0
> > > > > > [root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --status
> > > > > >
> > > > > > Network devices using DPDK-compatible driver
> > > > > > ============================================
> > > > > > 0000:00:04.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
> > > > > > 0000:00:05.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
> > > > > >
> > > > > > Network devices using kernel driver
> > > > > > ===================================
> > > > > > 0000:00:03.0 'Virtio network device' if= drv=virtio-pci
> > > > > > unused=virtio_pci,igb_uio
> > > > > >
> > > > > > Other network devices
> > > > > > =====================
> > > > > > <none>
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On
> > > > > > Behalf Of Srinivasreddy R
> > > > > > Sent: Thursday, February 26, 2015 6:11 AM
> > > > > > To: dev@dpdk.org; dpdk-ovs@lists.01.org
> > > > > > Subject: [Dpdk-ovs] problem in binding interfaces of virtio-pci on
> > > > > > the VM
> > > > > >
> > > > > > hi,
> > > > > > I have written a sample program for usvhost supported by OVDK.
> > > > > >
> > > > > > I have initialized the VM using the command below.
> > > > > > On the VM:
> > > > > >
> > > > > > I am able to see two interfaces, and they work fine with traffic
> > > > > > in raw-socket mode.
> > > > > > My problem is that when I bind the interfaces to the PMD driver
> > > > > > [igb_uio], my virtual machine hangs and I am not able to access
> > > > > > it any further.
> > > > > > Now my question is: what may be the reason for this behavior, and
> > > > > > how can I debug the root cause?
> > > > > > Please help in finding out the problem.
> > > > > >
> > > > > >
> > > > > >
> > > > > >  ./tools/dpdk_nic_bind.py --status
> > > > > >
> > > > > > Network devices using DPDK-compatible driver
> > > > > > ============================================
> > > > > > <none>
> > > > > >
> > > > > > Network devices using kernel driver
> > > > > > ===================================
> > > > > > 0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio *Active*
> > > > > > 0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> > > > > > 0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> > > > > >
> > > > > > Other network devices
> > > > > > =====================
> > > > > > <none>
> > > > > >
> > > > > >
> > > > > > ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> > > > > >
> > > > > >
> > > > > >
> > > > > > ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c -hda
> > > > > > /home/utils/images/vm1.img -m 2048M -smp 3 --enable-kvm -name 'VM1'
> > > > > > -nographic -vnc :1 -pidfile /tmp/vm1.pid -drive
> > > > > > file=fat:rw:/tmp/qemu_share,snapshot=off -monitor
> > > > > > unix:/tmp/vm1monitor,server,nowait -net none -no-reboot
> > > > > > -mem-path /dev/hugepages -mem-prealloc -netdev
> > > > > > type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device
> > > > > > virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > > > > > -netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on
> > > > > > -device
> > > > > > virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > ----------
> > > > > > thanks
> > > > > > srinivas.
> > > > > > _______________________________________________
> > > > > > Dpdk-ovs mailing list
> > > > > > Dpdk-ovs@lists.01.org
> > > > > > https://lists.01.org/mailman/listinfo/dpdk-ovs
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > thanks
> > > > > srinivas.
> > > >
> > >
> > >
> > >
> > > --
> > > thanks
> > > srinivas.
> >
>
>
>
> --
> thanks
> srinivas.
> _______________________________________________
> Dpdk-ovs mailing list
> Dpdk-ovs@lists.01.org
> https://lists.01.org/mailman/listinfo/dpdk-ovs
>



-- 
thanks
srinivas.

Thread overview: 10+ messages
2015-02-26 14:11 [dpdk-dev] " Srinivasreddy R
2015-02-26 15:12 ` [dpdk-dev] [Dpdk-ovs] " Polehn, Mike A
2015-02-26 16:38   ` Srinivasreddy R
2015-02-26 17:00     ` Bruce Richardson
2015-02-26 17:16       ` Srinivasreddy R
2015-02-27 10:06         ` Bruce Richardson
2015-02-27 10:59           ` Srinivasreddy R
2015-02-27 11:09             ` Bruce Richardson
2015-02-27 14:17             ` Mussar, Gary
2015-02-27 18:21               ` Srinivasreddy R [this message]
