Date: Fri, 30 Jan 2015 11:31:17 +0530
From: Srinivasreddy R <srinivasreddy4390@gmail.com>
To: Linhaifeng
Cc: "dev@dpdk.org", dpdk-ovs@lists.01.org
Subject: Re: [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8

hi,
Maybe I am missing something regarding hugetlbfs. I performed the steps below for hugetlbfs. I am running Ubuntu 14.04.1 LTS.

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic root=UUID=628ff32b-dede-4b47-bd13-893c13c18d00 ro quiet splash hugepagesz=2M hugepages=512 default_hugepagesz=1G hugepagesz=1G hugepages=8 vt.handoff=7

mount -t hugetlbfs nodev /mnt/huge
echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M

thanks,
Srinivas.

On Fri, Jan 30, 2015 at 11:19 AM, Srinivasreddy R <srinivasreddy4390@gmail.com> wrote:

> thanks for your reply. Even I face the same issue. Any pointers to proceed?
>
> ./build/app/vhost-switch -c f -n 4 -- -p 0x1 --dev-basename usvhost-1 --stats 2
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 1 on socket 0
> EAL: Detected lcore 2 as core 2 on socket 0
> EAL: Detected lcore 3 as core 3 on socket 0
> EAL: Detected lcore 4 as core 0 on socket 0
> EAL: Detected lcore 5 as core 1 on socket 0
> EAL: Detected lcore 6 as core 2 on socket 0
> EAL: Detected lcore 7 as core 3 on socket 0
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 8 lcore(s)
> EAL: cannot open VFIO container, error 2 (No such file or directory)
> EAL: VFIO support could not be initialized
> EAL: Setting up memory...
> EAL: Ask a virtual area of 0x200000000 bytes
> EAL: Virtual area found at 0x7fb6c0000000 (size = 0x200000000)
> EAL: Ask a virtual area of 0x1a00000 bytes
> EAL: Virtual area found at 0x7fb8f5600000 (size = 0x1a00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8f5200000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8f4e00000 (size = 0x200000)
> EAL: Ask a virtual area of 0x6c00000 bytes
> EAL: Virtual area found at 0x7fb8ee000000 (size = 0x6c00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8edc00000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8ed800000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8ed400000 (size = 0x200000)
> EAL: Ask a virtual area of 0x9e00000 bytes
> EAL: Virtual area found at 0x7fb8e3400000 (size = 0x9e00000)
> EAL: Ask a virtual area of 0x19000000 bytes
> EAL: Virtual area found at 0x7fb8ca200000 (size = 0x19000000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c9e00000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c9a00000 (size = 0x200000)
> EAL: Ask a virtual area of 0x10000000 bytes
> EAL: Virtual area found at 0x7fb6afe00000 (size = 0x10000000)
> EAL: Ask a virtual area of 0x3c00000 bytes
> EAL: Virtual area found at 0x7fb8c5c00000 (size = 0x3c00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c5800000 (size = 0x200000)
> EAL: Requesting 8 pages of size 1024MB from socket 0
> EAL: Requesting 512 pages of size 2MB from socket 0
> EAL: TSC frequency is ~3092840 KHz
> EAL: Master core 0 is ready (tid=f83c0880)
> PMD: ENICPMD trace: rte_enic_pmd_init
> EAL: Core 3 is ready (tid=c3ded700)
> EAL: Core 2 is ready (tid=c45ee700)
> EAL: Core 1 is ready (tid=c4def700)
> EAL: PCI device 0000:01:00.0 on NUMA socket -1
> EAL: probe driver: 8086:1521 rte_igb_pmd
> EAL: PCI memory mapped at 0x7fb8f7000000
> EAL: PCI memory mapped at 0x7fb8f7100000
> PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521
> EAL: PCI device 0000:01:00.1 on NUMA socket -1
> EAL: probe driver: 8086:1521 rte_igb_pmd
> EAL: 0000:01:00.1 not managed by UIO driver, skipping
> EAL: PCI device 0000:03:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10d3 rte_em_pmd
> EAL: 0000:03:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:04:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10d3 rte_em_pmd
> EAL: 0000:04:00.0 not managed by UIO driver, skipping
> pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f7e00 hw_ring=0x7fb8f5228580 dma_addr=0x36628580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f5d00 hw_ring=0x7fb8f5238580 dma_addr=0x36638580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f3c00 hw_ring=0x7fb8f5248580 dma_addr=0x36648580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f1b00 hw_ring=0x7fb8f5258580 dma_addr=0x36658580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60efa00 hw_ring=0x7fb8f5268580 dma_addr=0x36668580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60ed900 hw_ring=0x7fb8f5278580 dma_addr=0x36678580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60eb800 hw_ring=0x7fb8f5288580 dma_addr=0x36688580
> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60e9700 hw_ring=0x7fb8f5298580 dma_addr=0x36698580
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e7600 hw_ring=0x7fb8f52a8580 dma_addr=0x366a8580
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e5500 hw_ring=0x7fb8f52b8580 dma_addr=0x366b8580
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e3400 hw_ring=0x7fb8f52c8580 dma_addr=0x366c8580
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e1300 hw_ring=0x7fb8f52d8580 dma_addr=0x366d8580
> PMD: eth_igb_start(): <<
> VHOST_PORT: Max virtio devices supported: 8
> VHOST_PORT: Port 0 MAC: 2c 53 4a 00 28 68
> VHOST_DATA: Procesing on Core 1 started
> VHOST_DATA: Procesing on Core 2 started
> VHOST_DATA: Procesing on Core 3 started
> Device statistics ====================================
> ======================================================
> VHOST_CONFIG: (0) Device configuration started
> VHOST_CONFIG: (0) Failed to find memory file for pid 845
> Device statistics ====================================
>
>
> ./qemu-wrap.py -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu host -smp 2,sockets=2,cores=1,threads=1 -netdev tap,id=hostnet1,vhost=on -device virtio-net-pci,netdev=hostnet1,id=net1 -hda /home/utils/images/vm1.img -m 2048 -vnc 0.0.0.0:2 -net nic -net tap,ifname=tap3,script=no -mem-path /dev/hugepages -mem-prealloc
> W: /etc/qemu-ifup: no bridge for guest interface found
> file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
> qemu-system-x86_64: unable to start vhost net: 22: falling back on userspace virtio
>
>
> mount | grep huge
> cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,relatime,hugetlb)
> nodev on /dev/hugepages type hugetlbfs (rw)
> nodev on /mnt/huge type hugetlbfs (rw,pagesize=2M)
>
>
> cat /proc/meminfo
> MemTotal: 16345340 kB
> MemFree: 4591596 kB
> Buffers: 466472 kB
> Cached: 1218728 kB
> SwapCached: 0 kB
> Active: 1147228 kB
> Inactive: 762992 kB
> Active(anon): 232732 kB
> Inactive(anon): 14760 kB
> Active(file): 914496 kB
> Inactive(file): 748232 kB
> Unevictable: 3704 kB
> Mlocked: 3704 kB
> SwapTotal: 16686076 kB
> SwapFree: 16686076 kB
> Dirty: 488 kB
> Writeback: 0 kB
> AnonPages: 230800 kB
> Mapped: 55248 kB
> Shmem: 17932 kB
> Slab: 245116 kB
> SReclaimable: 214372 kB
> SUnreclaim: 30744 kB
> KernelStack: 3664 kB
> PageTables: 13900 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 20140152 kB
> Committed_AS: 1489760 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 374048 kB
> VmallocChunk: 34359356412 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 106496 kB
> HugePages_Total: 8
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> DirectMap4k: 91600 kB
> DirectMap2M: 2965504 kB
> DirectMap1G: 13631488 kB
>
>
> sysctl -A | grep huge
> vm.hugepages_treat_as_movable = 0
> vm.hugetlb_shm_group = 0
> vm.nr_hugepages = 8
> vm.nr_hugepages_mempolicy = 8
> vm.nr_overcommit_hugepages = 0
>
>
> thanks
> Srinivas.
>
>
>
> On Fri, Jan 30, 2015 at 10:59 AM, Linhaifeng wrote:
>
>> On 2015/1/30 0:48, Srinivasreddy R wrote:
>> > EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
>> > for that size
>>
>> Maybe you haven't mounted hugetlbfs.
>>
>> --
>> Regards,
>> Haifeng
>>
>
> --
> thanks
> srinivas.

--
thanks
srinivas.
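A quick check for the "can't mmap RAM pages" failure above is whether the pool backing QEMU's -mem-path still has free pages of its own size at the moment the guest starts. A minimal sketch (the hugepages-1048576kB directory name assumes 1G pages; adjust it to the page size of the mount passed to -mem-path):

grep -i huge /proc/meminfo
# per-size free counts; directories follow the kernel's hugepages-<size>kB naming
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages
# -mem-path /dev/hugepages uses that mount's page size (the default size,
# 1G in the cmdline above), so that pool must still have enough free pages
# to back the guest's -m 2048 before QEMU is launched.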
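And on the hugetlbfs side, a self-consistent 2M-only variant of the setup steps above, with both the vhost example and the guest drawing from the 2M pool, might look like the sketch below. The page count is only illustrative, and --huge-dir is the EAL option for selecting a non-default hugetlbfs mount; adjust both to your memory layout.

# reserve 2M pages and mount a hugetlbfs with a matching page size
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # 4 GB: switch plus a 2 GB guest
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M

# run the vhost example against that mount
./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1 --dev-basename usvhost-1 --stats 2

# back the guest's memory from the same pool
# (same options as the qemu-wrap.py command above, only -mem-path changes)
./qemu-wrap.py ... -m 2048 -mem-path /mnt/huge -mem-prealloc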