Date: Fri, 27 Feb 2015 16:29:36 +0530
From: Srinivasreddy R
To: Bruce Richardson
Cc: "dev@dpdk.org", "dpdk-ovs@lists.01.org", "Polehn, Mike A"
Subject: Re: [dpdk-dev] [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM

Hi,

Please find the output on the VM:

./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio *Active*
0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio

Other network devices
=====================

I am trying to bind the 'Virtio network device' interfaces at PCI addresses 00:04.0 and 00:05.0. The issue appears when I give the command below:

./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0

When QEMU is not able to allocate memory for the VM on /dev/hugepages, it prints the error "Cannot allocate memory"; in that case I am able to bind the interfaces to igb_uio. Does this give any hint about what I am doing wrong? Do I need to handle anything on the host when I bind to igb_uio on the guest for usvhost?
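A minimal sketch of a host-side hugepage check that applies before the QEMU command shown below (assumptions: the default 2 MB hugepage size and a hugetlbfs mount at /dev/hugepages; the page counts are illustrative):

# On the host: check reserved and free hugepages
grep Huge /proc/meminfo
# A -m 4096M guest backed by 2 MB pages needs 2048 free pages;
# reserve more if HugePages_Free falls short (value is the host-wide total):
echo 3072 > /proc/sys/vm/nr_hugepages
# Mount hugetlbfs if it is not already mounted:
mount -t hugetlbfs nodev /dev/hugepages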
./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c -hda /home/utils/images/vm1.img -m 4096M -smp 3 --enable-kvm -name 'VM1' -nographic -vnc :1 -pidfile /tmp/vm1.pid -drive file=fat:rw:/tmp/qemu_share,snapshot=off -monitor unix:/tmp/vm1monitor,server,nowait -net none -no-reboot -mem-path /dev/hugepages -mem-prealloc -netdev type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on -device virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -net nic -net tap,ifname=tap6,script=no

vvfat /tmp/qemu_share chs 1024,16,63
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio
qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio

thanks,
srinivas.

On Fri, Feb 27, 2015 at 3:36 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:

> On Thu, Feb 26, 2015 at 10:46:58PM +0530, Srinivasreddy R wrote:
> > Hi Bruce,
> > Thank you for your response.
> > I am accessing my VM via vncviewer, so ssh doesn't come into the picture.
> > Is there any way to find the root cause of my problem? Does DPDK store
> > any logs while binding interfaces to igb_uio?
> > I have looked through /var/log/messages but could not find any clue.
> >
> > The moment I give the command below, my VM gets stuck and does not respond
> > until I forcefully kill QEMU and relaunch it:
> > ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
>
> Does VNC not also connect using a network port? What is the output of
> ./dpdk_nic_bind.py --status before you run this command?
>
> /Bruce
>
> > thanks,
> > srinivas.
> >
> > On Thu, Feb 26, 2015 at 10:30 PM, Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> >
> > > On Thu, Feb 26, 2015 at 10:08:59PM +0530, Srinivasreddy R wrote:
> > > > Hi Mike,
> > > > Thanks for your detailed explanation of your example. I usually do
> > > > something similar and am familiar with running DPDK applications.
> > > > My problem is:
> > > > 1. I have written code for host-to-guest communication [taken from
> > > > the usvhost support developed in the OVDK vswitch].
> > > > 2. I launched a VM with two interfaces.
> > > > 3. I am able to send and receive traffic between guest and host on
> > > > these interfaces.
> > > > 4. When I try to bind these interfaces to igb_uio to run a DPDK
> > > > application, I am no longer able to access my instance. It gets
> > > > stuck and does not respond; I need to hard-reboot the VM.
> > >
> > > Are you sure you are not trying to access the VM via one of the interfaces
> > > now bound to igb_uio? If you bind the interface you use for ssh to igb_uio,
> > > you won't be able to ssh to that VM any more.
> > >
> > > /Bruce
> > >
> > > > My question is:
> > > > Surely I have done something wrong in the code, as my VM is no longer
> > > > accessible when I try to bind interfaces to igb_uio, and I am not
> > > > able to debug the issue.
> > > > Could someone please help me figure out the issue? I don't find
> > > > anything in /var/log/messages after relaunching the instance.
> > > >
> > > > thanks,
> > > > srinivas.
> > > >
> > > > On Thu, Feb 26, 2015 at 8:42 PM, Polehn, Mike A
> > > > <mike.a.polehn@intel.com> wrote:
> > > >
> > > > > In this example, the control network, 00:03.0, remains unbound to
> > > > > the UIO driver and stays attached to the Linux device driver (ssh
> > > > > access with putty); just the target interfaces are bound.
> > > > > Below, it shows all 3 interfaces; the ones bound to the UIO driver
> > > > > are not usable until a task uses the UIO driver.
> > > > >
> > > > > [root@F21vm l3fwd-vf]# lspci -nn
> > > > > 00:00.0 Host bridge [0600]: Intel Corporation 440FX - 82441FX PMC [Natoma] [8086:1237] (rev 02)
> > > > > 00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
> > > > > 00:01.1 IDE interface [0101]: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] [8086:7010]
> > > > > 00:01.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 03)
> > > > > 00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8]
> > > > > 00:03.0 Ethernet controller [0200]: Red Hat, Inc Virtio network device [1af4:1000]
> > > > > 00:04.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
> > > > > 00:05.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
> > > > >
> > > > > [root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
> > > > > [root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0
> > > > > [root@F21vm l3fwd-vf]# /usr/src/dpdk/tools/dpdk_nic_bind.py --status
> > > > >
> > > > > Network devices using DPDK-compatible driver
> > > > > ============================================
> > > > > 0000:00:04.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
> > > > > 0000:00:05.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
> > > > >
> > > > > Network devices using kernel driver
> > > > > ===================================
> > > > > 0000:00:03.0 'Virtio network device' if= drv=virtio-pci unused=virtio_pci,igb_uio
> > > > >
> > > > > Other network devices
> > > > > =====================
> > > > >
> > > > > -----Original Message-----
> > > > > From: Dpdk-ovs [mailto:dpdk-ovs-bounces@lists.01.org] On Behalf Of Srinivasreddy R
> > > > > Sent: Thursday, February 26, 2015 6:11 AM
> > > > > To: dev@dpdk.org; dpdk-ovs@lists.01.org
> > > > > Subject: [Dpdk-ovs] problem in binding interfaces of virtio-pci on the VM
> > > > >
> > > > > Hi,
> > > > > I have written a sample program for the usvhost supported by OVDK.
> > > > > I initialized the VM using the command below.
> > > > > On the VM:
> > > > > I am able to see two interfaces, and they work fine with traffic in
> > > > > raw-socket mode.
> > > > > My problem is that when I bind the interfaces to the PMD driver
> > > > > [igb_uio], my virtual machine hangs and I am not able to access it
> > > > > further.
> > > > > Now my questions are: what may be the reason for this behavior, and
> > > > > how can I debug the root cause?
> > > > > Please help in finding out the problem.
> > > > >
> > > > > ./tools/dpdk_nic_bind.py --status
> > > > >
> > > > > Network devices using DPDK-compatible driver
> > > > > ============================================
> > > > >
> > > > > Network devices using kernel driver
> > > > > ===================================
> > > > > 0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio *Active*
> > > > > 0000:00:04.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> > > > > 0000:00:05.0 'Virtio network device' if= drv=virtio-pci unused=igb_uio
> > > > >
> > > > > Other network devices
> > > > > =====================
> > > > >
> > > > > ./dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
> > > > >
> > > > > ./x86_64-softmmu/qemu-system-x86_64 -cpu host -boot c -hda /home/utils/images/vm1.img -m 2048M -smp 3 --enable-kvm -name 'VM1' -nographic -vnc :1 -pidfile /tmp/vm1.pid -drive file=fat:rw:/tmp/qemu_share,snapshot=off -monitor unix:/tmp/vm1monitor,server,nowait -net none -no-reboot -mem-path /dev/hugepages -mem-prealloc -netdev type=tap,id=net1,script=no,downscript=no,ifname=usvhost1,vhost=on -device virtio-net-pci,netdev=net1,mac=00:16:3e:00:03:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=net2,script=no,downscript=no,ifname=usvhost2,vhost=on -device virtio-net-pci,netdev=net2,mac=00:16:3e:00:03:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > > > >
> > > > > ----------
> > > > > thanks
> > > > > srinivas.
> > > > > _______________________________________________
> > > > > Dpdk-ovs mailing list
> > > > > Dpdk-ovs@lists.01.org
> > > > > https://lists.01.org/mailman/listinfo/dpdk-ovs
> > > >
> > > > --
> > > > thanks
> > > > srinivas.
> >
> > --
> > thanks
> > srinivas.

--
thanks
srinivas.
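A quick pre-bind sanity check on the guest, following the advice in Bruce's and Mike's replies above, is to confirm that the device about to be bound is not the one carrying the ssh or VNC management session. A minimal sketch (the igb_uio.ko path is build-specific and illustrative only):

# Which interface carries the default (management) route?
ip route show default
# In the status output above, 00:03.0 (ens3, e1000) is marked *Active*,
# so leave it on its kernel driver and bind only the two virtio devices:
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko   # path depends on your build target
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0
./tools/dpdk_nic_bind.py --status   # ens3 should still be listed as *Active* on e1000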