DPDK patches and discussions
From: Linhaifeng <haifeng.lin@huawei.com>
To: Srinivasreddy R <srinivasreddy4390@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, dpdk-ovs@lists.01.org
Subject: Re: [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8
Date: Sat, 31 Jan 2015 09:06:31 +0800
Message-ID: <54CC2A97.5020701@huawei.com>
In-Reply-To: <CAJP4VWidABxVJyP9vyzDrd=1Rke+DOqqAUf+iQTOokR=XKJt-A@mail.gmail.com>



On 2015/1/30 14:01, Srinivasreddy R wrote:
> hi,
> 
> Maybe I am missing something regarding hugetlbfs.
> I performed the steps below for hugetlbfs.
> I am running on Ubuntu 14.04.1 LTS.
> 
> cat /proc/cmdline
> BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic
> root=UUID=628ff32b-dede-4b47-bd13-893c13c18d00 ro quiet splash
> hugepagesz=2M hugepages=512 default_hugepagesz=1G hugepagesz=1G hugepages=8
> vt.handoff=7
> 
> mount -t hugetlbfs nodev /mnt/huge
> 
> echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
> mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M
> 

Did you mount /mnt/huge twice, once without a pagesize option and once with pagesize=2M?

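Since your cmdline sets default_hugepagesz=1G, the existing /dev/hugepages mount (no pagesize option) already serves the 1G pages, so for the 2M pages a separate mount point is usually cleaner than mounting /mnt/huge twice. A rough sketch, with /mnt/huge_2M only as an example path:

  # separate mount point for the 2M pages (example path)
  mkdir -p /mnt/huge_2M
  mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge_2M
  # make sure the 512 x 2M pages are actually allocated
  echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

Then "mount | grep huge" should show one hugetlbfs entry per page size.
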
> Thanks,
> Srinivas.
> 
> 
> 
> On Fri, Jan 30, 2015 at 11:19 AM, Srinivasreddy R <
> srinivasreddy4390@gmail.com> wrote:
> 
>> Thanks for your reply. I face the same issue as well. Any pointers on
>> how to proceed?
>>
>>
>> ./build/app/vhost-switch -c f -n 4  -- -p 0x1   --dev-basename usvhost-1
>> --stats 2
>> EAL: Detected lcore 0 as core 0 on socket 0
>> EAL: Detected lcore 1 as core 1 on socket 0
>> EAL: Detected lcore 2 as core 2 on socket 0
>> EAL: Detected lcore 3 as core 3 on socket 0
>> EAL: Detected lcore 4 as core 0 on socket 0
>> EAL: Detected lcore 5 as core 1 on socket 0
>> EAL: Detected lcore 6 as core 2 on socket 0
>> EAL: Detected lcore 7 as core 3 on socket 0
>> EAL: Support maximum 128 logical core(s) by configuration.
>> EAL: Detected 8 lcore(s)
>> EAL:   cannot open VFIO container, error 2 (No such file or directory)
>> EAL: VFIO support could not be initialized
>> EAL: Setting up memory...
>> EAL: Ask a virtual area of 0x200000000 bytes
>> EAL: Virtual area found at 0x7fb6c0000000 (size = 0x200000000)
>> EAL: Ask a virtual area of 0x1a00000 bytes
>> EAL: Virtual area found at 0x7fb8f5600000 (size = 0x1a00000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8f5200000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8f4e00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x6c00000 bytes
>> EAL: Virtual area found at 0x7fb8ee000000 (size = 0x6c00000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8edc00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8ed800000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8ed400000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x9e00000 bytes
>> EAL: Virtual area found at 0x7fb8e3400000 (size = 0x9e00000)
>> EAL: Ask a virtual area of 0x19000000 bytes
>> EAL: Virtual area found at 0x7fb8ca200000 (size = 0x19000000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8c9e00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8c9a00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x10000000 bytes
>> EAL: Virtual area found at 0x7fb6afe00000 (size = 0x10000000)
>> EAL: Ask a virtual area of 0x3c00000 bytes
>> EAL: Virtual area found at 0x7fb8c5c00000 (size = 0x3c00000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7fb8c5800000 (size = 0x200000)
>> EAL: Requesting 8 pages of size 1024MB from socket 0
>> EAL: Requesting 512 pages of size 2MB from socket 0
>> EAL: TSC frequency is ~3092840 KHz
>> EAL: Master core 0 is ready (tid=f83c0880)
>> PMD: ENICPMD trace: rte_enic_pmd_init
>> EAL: Core 3 is ready (tid=c3ded700)
>> EAL: Core 2 is ready (tid=c45ee700)
>> EAL: Core 1 is ready (tid=c4def700)
>> EAL: PCI device 0000:01:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1521 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x7fb8f7000000
>> EAL:   PCI memory mapped at 0x7fb8f7100000
>> PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521
>> EAL: PCI device 0000:01:00.1 on NUMA socket -1
>> EAL:   probe driver: 8086:1521 rte_igb_pmd
>> EAL:   0000:01:00.1 not managed by UIO driver, skipping
>> EAL: PCI device 0000:03:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10d3 rte_em_pmd
>> EAL:   0000:03:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:04:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10d3 rte_em_pmd
>> EAL:   0000:04:00.0 not managed by UIO driver, skipping
>> pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f7e00
>> hw_ring=0x7fb8f5228580 dma_addr=0x36628580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f5d00
>> hw_ring=0x7fb8f5238580 dma_addr=0x36638580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f3c00
>> hw_ring=0x7fb8f5248580 dma_addr=0x36648580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f1b00
>> hw_ring=0x7fb8f5258580 dma_addr=0x36658580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60efa00
>> hw_ring=0x7fb8f5268580 dma_addr=0x36668580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60ed900
>> hw_ring=0x7fb8f5278580 dma_addr=0x36678580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60eb800
>> hw_ring=0x7fb8f5288580 dma_addr=0x36688580
>> PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60e9700
>> hw_ring=0x7fb8f5298580 dma_addr=0x36698580
>> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
>> setting the TX WTHRESH value to 4, 8, or 16.
>> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e7600
>> hw_ring=0x7fb8f52a8580 dma_addr=0x366a8580
>> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
>> setting the TX WTHRESH value to 4, 8, or 16.
>> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e5500
>> hw_ring=0x7fb8f52b8580 dma_addr=0x366b8580
>> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
>> setting the TX WTHRESH value to 4, 8, or 16.
>> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e3400
>> hw_ring=0x7fb8f52c8580 dma_addr=0x366c8580
>> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
>> setting the TX WTHRESH value to 4, 8, or 16.
>> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e1300
>> hw_ring=0x7fb8f52d8580 dma_addr=0x366d8580
>> PMD: eth_igb_start(): <<
>> VHOST_PORT: Max virtio devices supported: 8
>> VHOST_PORT: Port 0 MAC: 2c 53 4a 00 28 68
>> VHOST_DATA: Procesing on Core 1 started
>> VHOST_DATA: Procesing on Core 2 started
>> VHOST_DATA: Procesing on Core 3 started
>> Device statistics ====================================
>> ======================================================
>> VHOST_CONFIG: (0) Device configuration started
>> VHOST_CONFIG: (0) Failed to find memory file for pid 845
>> Device statistics ====================================
>>
>>
>>
>> ./qemu-wrap.py -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu host -smp
>> 2,sockets=2,cores=1,threads=1  -netdev tap,id=hostnet1,vhost=on -device
>> virtio-net-pci,netdev=hostnet1,id=net1  -hda /home/utils/images/vm1.img  -m
>> 2048  -vnc 0.0.0.0:2   -net nic -net tap,ifname=tap3,script=no -mem-path
>> /dev/hugepages -mem-prealloc
>> W: /etc/qemu-ifup: no bridge for guest interface found
>> file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
>> qemu-system-x86_64: unable to start vhost net: 22: falling back on
>> userspace virtio
>>
>>
>>  mount  | grep huge
>> cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,relatime,hugetlb)
>> nodev on /dev/hugepages type hugetlbfs (rw)
>> nodev on /mnt/huge type hugetlbfs (rw,pagesize=2M)
>>
>>
>>
>>  cat /proc/meminfo
>> MemTotal:       16345340 kB
>> MemFree:         4591596 kB
>> Buffers:          466472 kB
>> Cached:          1218728 kB
>> SwapCached:            0 kB
>> Active:          1147228 kB
>> Inactive:         762992 kB
>> Active(anon):     232732 kB
>> Inactive(anon):    14760 kB
>> Active(file):     914496 kB
>> Inactive(file):   748232 kB
>> Unevictable:        3704 kB
>> Mlocked:            3704 kB
>> SwapTotal:      16686076 kB
>> SwapFree:       16686076 kB
>> Dirty:               488 kB
>> Writeback:             0 kB
>> AnonPages:        230800 kB
>> Mapped:            55248 kB
>> Shmem:             17932 kB
>> Slab:             245116 kB
>> SReclaimable:     214372 kB
>> SUnreclaim:        30744 kB
>> KernelStack:        3664 kB
>> PageTables:        13900 kB
>> NFS_Unstable:          0 kB
>> Bounce:                0 kB
>> WritebackTmp:          0 kB
>> CommitLimit:    20140152 kB
>> Committed_AS:    1489760 kB
>> VmallocTotal:   34359738367 kB
>> VmallocUsed:      374048 kB
>> VmallocChunk:   34359356412 kB
>> HardwareCorrupted:     0 kB
>> AnonHugePages:    106496 kB
>> HugePages_Total:       8
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> DirectMap4k:       91600 kB
>> DirectMap2M:     2965504 kB
>> DirectMap1G:    13631488 kB
>>
>>
>>
>> sysctl -A | grep huge
>> vm.hugepages_treat_as_movable = 0
>> vm.hugetlb_shm_group = 0
>> vm.nr_hugepages = 8
>> vm.nr_hugepages_mempolicy = 8
>> vm.nr_overcommit_hugepages = 0
>>
>>
>> thanks
>> Srinivas.
>>
>>
>>
>> On Fri, Jan 30, 2015 at 10:59 AM, Linhaifeng <haifeng.lin@huawei.com>
>> wrote:
>>
>>>
>>>
>>> On 2015/1/30 0:48, Srinivasreddy R wrote:
>>>> EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs
>>> found
>>>> for that size
>>>
>>> Maybe you haven't mounted hugetlbfs.
>>> --
>>> Regards,
>>> Haifeng
>>>
>>>
>>
>>
>> --
>> thanks
>> srinivas.
>>
> 
> 
> 

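About the qemu failure: /proc/meminfo above shows HugePages_Free: 0 for the default 1G size, and it looks like vhost-switch has already taken all of them (EAL requested 8 pages of size 1024MB), so qemu started with -mem-path /dev/hugepages (the 1G mount) has nothing left to mmap and falls back to userspace virtio. That would also explain the "Failed to find memory file for pid 845" message, since as far as I know the dpdk-1.8 vhost example can only find guest memory that is backed by hugetlbfs. A rough way to check and work around it, with the paths and sizes below only as examples:

  # how many pages of each size are still free
  grep . /sys/kernel/mm/hugepages/hugepages-*/free_hugepages

  # either reserve more 1G pages on the kernel cmdline than
  # vhost-switch consumes, or back the guest from the 2M mount:
  ./qemu-wrap.py ... -m 2048 -mem-path /mnt/huge_2M -mem-prealloc
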
-- 
Regards,
Haifeng


Thread overview: 5+ messages
2015-01-29 16:48 Srinivasreddy R
2015-01-30  5:29 ` Linhaifeng
2015-01-30  5:49   ` Srinivasreddy R
2015-01-30  6:01     ` Srinivasreddy R
2015-01-31  1:06       ` Linhaifeng [this message]
