* [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8
@ 2015-01-29 16:48 Srinivasreddy R
2015-01-30 5:29 ` Linhaifeng
0 siblings, 1 reply; 5+ messages in thread
From: Srinivasreddy R @ 2015-01-29 16:48 UTC (permalink / raw)
To: dev, dpdk-ovs
Hi,
I am using dpdk-1.8.0 and trying to run the vhost example. I followed the
sample app user guide at the link below:
http://www.dpdk.org/doc/guides/sample_app_ug/vhost.html
I am facing a problem while running it. What may be the reason? Maybe I am
missing something.
VHOST_CONFIG: (0) Failed to find memory file for pid 5235
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio
Vhost switch app :
/home/utils/dpdk-1.8.0/examples/vhost# ./build/app/vhost-switch -c f -n 4
-- -p 0x1 --dev-basename usvhost-1 --stats 2
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size
EAL: cannot open VFIO container, error 2 (No such file or directory)
EAL: VFIO support could not be initialized
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000000 bytes
EAL: Virtual area found at 0x7f89c0000000 (size = 0x200000000)
EAL: Requesting 8 pages of size 1024MB from socket 0
EAL: TSC frequency is ~3092841 KHz
EAL: Master core 0 is ready (tid=f500d880)
PMD: ENICPMD trace: rte_enic_pmd_init
EAL: Core 3 is ready (tid=f26e5700)
EAL: Core 2 is ready (tid=f2ee6700)
EAL: Core 1 is ready (tid=f36e7700)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7f8bc0000000
EAL: PCI memory mapped at 0x7f8bc0100000
PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: 0000:01:00.1 not managed by UIO driver, skipping
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 8086:10d3 rte_em_pmd
EAL: 0000:03:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:04:00.0 on NUMA socket -1
EAL: probe driver: 8086:10d3 rte_em_pmd
EAL: 0000:04:00.0 not managed by UIO driver, skipping
pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af7e00
hw_ring=0x7f8a0bbc2000 dma_addr=0x24bbc2000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af5d00
hw_ring=0x7f8a0bbd2000 dma_addr=0x24bbd2000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af3c00
hw_ring=0x7f8a0bbe2000 dma_addr=0x24bbe2000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0af1b00
hw_ring=0x7f8a0bbf2000 dma_addr=0x24bbf2000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0aefa00
hw_ring=0x7f8a0bc02000 dma_addr=0x24bc02000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0aed900
hw_ring=0x7f8a0bc12000 dma_addr=0x24bc12000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0aeb800
hw_ring=0x7f8a0bc22000 dma_addr=0x24bc22000
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f89c0ae9700
hw_ring=0x7f8a0bc32000 dma_addr=0x24bc32000
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae7600
hw_ring=0x7f8a0bc42000 dma_addr=0x24bc42000
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae5500
hw_ring=0x7f8a0bc52000 dma_addr=0x24bc52000
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae3400
hw_ring=0x7f8a0bc62000 dma_addr=0x24bc62000
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f89c0ae1300
hw_ring=0x7f8a0bc72000 dma_addr=0x24bc72000
PMD: eth_igb_start(): <<
VHOST_PORT: Max virtio devices supported: 8
VHOST_PORT: Port 0 MAC: 2c 53 4a 00 28 68
VHOST_DATA: Procesing on Core 1 started
VHOST_DATA: Procesing on Core 2 started
VHOST_DATA: Procesing on Core 3 started
Device statistics ====================================
======================================================
VHOST_CONFIG: (0) Device configuration started
Device statistics ====================================
======================================================
VHOST_CONFIG: (0) Failed to find memory file for pid 5235
Device statistics ====================================
======================================================
Qemu :
./qemu-wrap.py -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu host -smp
2,sockets=2,cores=1,threads=1 -netdev tap,id=hostnet1,vhost=on -device
virtio-net-pci,netdev=hostnet1,id=net1 -hda /home/utils/images/vm1.img -m
2048 -vnc 0.0.0.0:2 -net nic -net tap,ifname=tap3,script=no -mem-path
/dev/hugepages -mem-prealloc
W: /etc/qemu-ifup: no bridge for guest interface found
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio
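[Editorial sketch, not from the original mail: a rough sanity check of how many free hugepages the QEMU command above needs. The numbers mirror the command line (-m 2048) and the 1 GB default hugepage size reported later in the thread's /proc/meminfo output.]

```shell
# Rough check: how many free default-size hugepages QEMU's -mem-path
# allocation needs for the guest RAM requested with -m.
guest_mb=2048            # from the -m 2048 option above
hugepage_kb=1048576      # Hugepagesize reported in /proc/meminfo (1 GB)
pages_needed=$(( (guest_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "QEMU needs $pages_needed free hugepage(s) of the default size"
```

If fewer pages than that are free at launch time, file_ram_alloc's mmap fails with exactly the "Cannot allocate memory" error shown above.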
--------
thanks
srinivas.
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8
2015-01-29 16:48 [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8 Srinivasreddy R
@ 2015-01-30 5:29 ` Linhaifeng
2015-01-30 5:49 ` Srinivasreddy R
0 siblings, 1 reply; 5+ messages in thread
From: Linhaifeng @ 2015-01-30 5:29 UTC (permalink / raw)
To: Srinivasreddy R, dev, dpdk-ovs
On 2015/1/30 0:48, Srinivasreddy R wrote:
> EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
> for that size
Maybe you haven't mounted hugetlbfs.
--
Regards,
Haifeng
* Re: [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8
2015-01-30 5:29 ` Linhaifeng
@ 2015-01-30 5:49 ` Srinivasreddy R
2015-01-30 6:01 ` Srinivasreddy R
0 siblings, 1 reply; 5+ messages in thread
From: Srinivasreddy R @ 2015-01-30 5:49 UTC (permalink / raw)
To: Linhaifeng; +Cc: dev, dpdk-ovs
Thanks for your reply. I still face the same issue. Any pointers on how to
proceed?
./build/app/vhost-switch -c f -n 4 -- -p 0x1 --dev-basename usvhost-1
--stats 2
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
EAL: cannot open VFIO container, error 2 (No such file or directory)
EAL: VFIO support could not be initialized
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000000 bytes
EAL: Virtual area found at 0x7fb6c0000000 (size = 0x200000000)
EAL: Ask a virtual area of 0x1a00000 bytes
EAL: Virtual area found at 0x7fb8f5600000 (size = 0x1a00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8f5200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8f4e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x6c00000 bytes
EAL: Virtual area found at 0x7fb8ee000000 (size = 0x6c00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8edc00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8ed800000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8ed400000 (size = 0x200000)
EAL: Ask a virtual area of 0x9e00000 bytes
EAL: Virtual area found at 0x7fb8e3400000 (size = 0x9e00000)
EAL: Ask a virtual area of 0x19000000 bytes
EAL: Virtual area found at 0x7fb8ca200000 (size = 0x19000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c9e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c9a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x10000000 bytes
EAL: Virtual area found at 0x7fb6afe00000 (size = 0x10000000)
EAL: Ask a virtual area of 0x3c00000 bytes
EAL: Virtual area found at 0x7fb8c5c00000 (size = 0x3c00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c5800000 (size = 0x200000)
EAL: Requesting 8 pages of size 1024MB from socket 0
EAL: Requesting 512 pages of size 2MB from socket 0
EAL: TSC frequency is ~3092840 KHz
EAL: Master core 0 is ready (tid=f83c0880)
PMD: ENICPMD trace: rte_enic_pmd_init
EAL: Core 3 is ready (tid=c3ded700)
EAL: Core 2 is ready (tid=c45ee700)
EAL: Core 1 is ready (tid=c4def700)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7fb8f7000000
EAL: PCI memory mapped at 0x7fb8f7100000
PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: 0000:01:00.1 not managed by UIO driver, skipping
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 8086:10d3 rte_em_pmd
EAL: 0000:03:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:04:00.0 on NUMA socket -1
EAL: probe driver: 8086:10d3 rte_em_pmd
EAL: 0000:04:00.0 not managed by UIO driver, skipping
pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f7e00
hw_ring=0x7fb8f5228580 dma_addr=0x36628580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f5d00
hw_ring=0x7fb8f5238580 dma_addr=0x36638580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f3c00
hw_ring=0x7fb8f5248580 dma_addr=0x36648580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60f1b00
hw_ring=0x7fb8f5258580 dma_addr=0x36658580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60efa00
hw_ring=0x7fb8f5268580 dma_addr=0x36668580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60ed900
hw_ring=0x7fb8f5278580 dma_addr=0x36678580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60eb800
hw_ring=0x7fb8f5288580 dma_addr=0x36688580
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fb8f60e9700
hw_ring=0x7fb8f5298580 dma_addr=0x36698580
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e7600
hw_ring=0x7fb8f52a8580 dma_addr=0x366a8580
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e5500
hw_ring=0x7fb8f52b8580 dma_addr=0x366b8580
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e3400
hw_ring=0x7fb8f52c8580 dma_addr=0x366c8580
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider
setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fb8f60e1300
hw_ring=0x7fb8f52d8580 dma_addr=0x366d8580
PMD: eth_igb_start(): <<
VHOST_PORT: Max virtio devices supported: 8
VHOST_PORT: Port 0 MAC: 2c 53 4a 00 28 68
VHOST_DATA: Procesing on Core 1 started
VHOST_DATA: Procesing on Core 2 started
VHOST_DATA: Procesing on Core 3 started
Device statistics ====================================
======================================================
VHOST_CONFIG: (0) Device configuration started
VHOST_CONFIG: (0) Failed to find memory file for pid 845
Device statistics ====================================
./qemu-wrap.py -machine pc-i440fx-1.4,accel=kvm,usb=off -cpu host -smp
2,sockets=2,cores=1,threads=1 -netdev tap,id=hostnet1,vhost=on -device
virtio-net-pci,netdev=hostnet1,id=net1 -hda /home/utils/images/vm1.img -m
2048 -vnc 0.0.0.0:2 -net nic -net tap,ifname=tap3,script=no -mem-path
/dev/hugepages -mem-prealloc
W: /etc/qemu-ifup: no bridge for guest interface found
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
qemu-system-x86_64: unable to start vhost net: 22: falling back on
userspace virtio
mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,relatime,hugetlb)
nodev on /dev/hugepages type hugetlbfs (rw)
nodev on /mnt/huge type hugetlbfs (rw,pagesize=2M)
cat /proc/meminfo
MemTotal: 16345340 kB
MemFree: 4591596 kB
Buffers: 466472 kB
Cached: 1218728 kB
SwapCached: 0 kB
Active: 1147228 kB
Inactive: 762992 kB
Active(anon): 232732 kB
Inactive(anon): 14760 kB
Active(file): 914496 kB
Inactive(file): 748232 kB
Unevictable: 3704 kB
Mlocked: 3704 kB
SwapTotal: 16686076 kB
SwapFree: 16686076 kB
Dirty: 488 kB
Writeback: 0 kB
AnonPages: 230800 kB
Mapped: 55248 kB
Shmem: 17932 kB
Slab: 245116 kB
SReclaimable: 214372 kB
SUnreclaim: 30744 kB
KernelStack: 3664 kB
PageTables: 13900 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 20140152 kB
Committed_AS: 1489760 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 374048 kB
VmallocChunk: 34359356412 kB
HardwareCorrupted: 0 kB
AnonHugePages: 106496 kB
HugePages_Total: 8
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
DirectMap4k: 91600 kB
DirectMap2M: 2965504 kB
DirectMap1G: 13631488 kB
sysctl -A | grep huge
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.nr_hugepages = 8
vm.nr_hugepages_mempolicy = 8
vm.nr_overcommit_hugepages = 0
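[Editorial sketch, not from the original mail: reading the counters pasted above, HugePages_Free is 0, so no default-size (1 GB) pages are left for QEMU to map. The heredoc-style sample below reproduces the pasted values so the check is self-contained; on a live host you would read /proc/meminfo directly.]

```shell
# Parse the default-size hugepage counters; the sample string reproduces
# the /proc/meminfo values pasted above.
meminfo="HugePages_Total:       8
HugePages_Free:        0
Hugepagesize:    1048576 kB"

total=$(printf '%s\n' "$meminfo" | awk '/^HugePages_Total:/ {print $2}')
free=$(printf '%s\n' "$meminfo" | awk '/^HugePages_Free:/ {print $2}')
echo "default-size hugepages: $free free of $total"
# 0 free pages means QEMU's -mem-path mmap must fail with ENOMEM,
# matching the "can't mmap RAM pages" error above.
```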
thanks
Srinivas.
On Fri, Jan 30, 2015 at 10:59 AM, Linhaifeng <haifeng.lin@huawei.com> wrote:
>
>
> On 2015/1/30 0:48, Srinivasreddy R wrote:
> > EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found
> > for that size
>
> Maybe you haven't mount hugetlbfs.
> --
> Regards,
> Haifeng
>
>
--
thanks
srinivas.
* Re: [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8
2015-01-30 5:49 ` Srinivasreddy R
@ 2015-01-30 6:01 ` Srinivasreddy R
2015-01-31 1:06 ` Linhaifeng
0 siblings, 1 reply; 5+ messages in thread
From: Srinivasreddy R @ 2015-01-30 6:01 UTC (permalink / raw)
To: Linhaifeng; +Cc: dev, dpdk-ovs
Hi,
Maybe I am missing something regarding hugetlbfs. I performed the steps
below to set it up. I am running Ubuntu 14.04.1 LTS.
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic
root=UUID=628ff32b-dede-4b47-bd13-893c13c18d00 ro quiet splash
hugepagesz=2M hugepages=512 default_hugepagesz=1G hugepagesz=1G hugepages=8
vt.handoff=7
mount -t hugetlbfs nodev /mnt/huge
echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M
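[Editorial sketch, not from the original mail: a setup I would expect to work, with one hugetlbfs mount per page size, each on its own directory, instead of mounting /mnt/huge twice. The /mnt/huge_2M name is my own choice for illustration; these commands require root.]

```shell
# Allocate the 2 MB pages (the 1 GB pages come from the kernel command line).
echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# One mount point per page size, each mounted exactly once:
# /dev/hugepages keeps the 1 GB default pages for QEMU's -mem-path,
# /mnt/huge_2M carries the 2 MB pages for other consumers.
mkdir -p /mnt/huge_2M
mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge_2M

mount | grep hugetlbfs   # verify each size appears exactly once
```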
Thanks,
Srinivas.
* Re: [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8
2015-01-30 6:01 ` Srinivasreddy R
@ 2015-01-31 1:06 ` Linhaifeng
0 siblings, 0 replies; 5+ messages in thread
From: Linhaifeng @ 2015-01-31 1:06 UTC (permalink / raw)
To: Srinivasreddy R; +Cc: dev, dpdk-ovs
On 2015/1/30 14:01, Srinivasreddy R wrote:
> hi,
>
> May be I am missing something regarding hugetlbfs .
> I performed below steps for hugetlbfs .
> I am running on Ubuntu 14.04.1 LTS.
>
> cat /proc/cmdline
> BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic
> root=UUID=628ff32b-dede-4b47-bd13-893c13c18d00 ro quiet splash
> hugepagesz=2M hugepages=512 default_hugepagesz=1G hugepagesz=1G hugepages=8
> vt.handoff=7
>
> mount -t hugetlbfs nodev /mnt/huge
>
> echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/ nr_hugepages
> mount -t hugetlbfs nodev /mnt/huge -o pagesize=2M
>
Did you mount /mnt/huge twice?
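[Editorial sketch, not from the original mail: mounting hugetlbfs on /mnt/huge twice stacks a second mount on top of the first, so only the topmost one (with pagesize=2M) is visible at that path. One way to undo the duplicate before remounting; these commands require root.]

```shell
# Peel off the stacked mounts, then mount once with the intended page size.
umount /mnt/huge          # removes the topmost (pagesize=2M) mount
umount /mnt/huge          # removes the default-size mount underneath
mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge
```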
--
Regards,
Haifeng
Thread overview: 5+ messages
2015-01-29 16:48 [dpdk-dev] [DISCUSSION] : ERROR while running vhost example in dpdk-1.8 Srinivasreddy R
2015-01-30 5:29 ` Linhaifeng
2015-01-30 5:49 ` Srinivasreddy R
2015-01-30 6:01 ` Srinivasreddy R
2015-01-31 1:06 ` Linhaifeng