From: "McCullough, Harrison" <harrison_mccullough@labs.att.com>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] 0MB available on socket when running pktgen-dpdk
Date: Wed, 28 Jun 2017 17:36:25 +0000
Message-ID: <942AD08E4186F644A54168E4F84117C9317E0F@SAUSMAILMBX1.ad.tri.sbc.com>
In-Reply-To: <CE1FF105-A7C1-4650-B06C-E314E524B96D@intel.com>
That was the problem, thank you!
-----Original Message-----
From: Wiles, Keith [mailto:keith.wiles@intel.com]
Sent: Wednesday, June 28, 2017 12:30 PM
To: McCullough, Harrison <harrison_mccullough@labs.att.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] 0MB available on socket when running pktgen-dpdk
> On Jun 28, 2017, at 10:03 AM, McCullough, Harrison <harrison_mccullough@labs.att.com> wrote:
>
> When I run pktgen-dpdk I always get the error that there are 0MB available (no
> matter how much I request). However, if I run testpmd, then it seems to run
> without complaint. Does anybody know what's going on?
--socket-mem 512,512 is trying to allocate memory on socket 0 and socket 1. If you only have one socket (one CPU) installed, then remove the ,512 and that should work.
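
You can check how many sockets the box actually has with something like:

  lscpu | grep -i numa

(or numactl --hardware). If it reports a single NUMA node, the run line printed by run.py below would just become:

  sudo ./app/x86_64-native-linuxapp-gcc/pktgen -l 0,1-4 -n 2 --proc-type auto --log-level 7 --socket-mem 512 --file-prefix pg -- -T -P --crc-strip -m [1:2].0 -m [3:4].1 -f themes/black-yellow.theme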
>
> P.S. Why does /proc/meminfo always report that there are 0 hugepages free after
> I do anything with DPDK? Does it not clean up afterward and free them? If I rm
> /mnt/huge/* then /proc/meminfo reports that the hugepages were freed, but it
> seems like I shouldn't have to manually do that.
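
As far as I know, in this DPDK release the EAL backs its mappings with files in the hugetlbfs mount (named <prefix>map_N, e.g. rtemap_* for testpmd's default prefix) and does not unlink them on exit, so the kernel keeps those pages reserved until the files go away. You can see the leftovers after a run:

  ls -l /mnt/huge        # one 2MB file per mapped hugepage
  sudo rm /mnt/huge/*    # releases them, as you found
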
>
> P.P.S. If /proc/meminfo reports that there are 0 hugepages free, then why does
> testpmd work? It appears to be using them, since they are no longer free after
> running testpmd.
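
If I remember right, at startup the EAL clears out any stale hugepage files it can lock in the mount before creating its own, which is why a fresh testpmd run can still map pages even though /proc/meminfo says 0 free. The pktgen failure is a separate problem: the ,512 asks for 512MB on a socket 1 that the box does not have. /proc/meminfo is system-wide; for the per-socket picture, look at sysfs:

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

On a single-socket machine only node0 shows up, so socket 1 really does have 0MB available.
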
>
> root@ubuntu:/home/harrison/git/dpdk# grep -i huge /proc/meminfo
> AnonHugePages: 208896 kB
> HugePages_Total: 1024
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> root@ubuntu:/home/harrison/git/dpdk# ./x86_64-native-linuxapp-gcc/app/testpmd
> EAL: Detected 8 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:05:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10e8 net_e1000_igb
> EAL: PCI device 0000:05:00.1 on NUMA socket -1
> EAL: probe driver: 8086:10e8 net_e1000_igb
> EAL: PCI device 0000:06:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10e8 net_e1000_igb
> EAL: PCI device 0000:06:00.1 on NUMA socket -1
> EAL: probe driver: 8086:10e8 net_e1000_igb
> Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
> Configuring Port 0 (socket 0)
> Port 0: 00:1B:21:6C:FC:9C
> Configuring Port 1 (socket 0)
> Port 1: 00:1B:21:6C:FC:9D
> Checking link statuses...
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
> Logical Core 1 (socket 0) forwards packets on 2 streams:
> RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
> RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>
> io packet forwarding - CRC stripping enabled - packets/burst=32
> nb forwarding cores=1 - nb forwarding ports=2
> RX queues=1 - RX desc=128 - RX free threshold=32
> RX threshold registers: pthresh=8 hthresh=8 wthresh=1
> TX queues=1 - TX desc=512 - TX free threshold=0
> TX threshold registers: pthresh=8 hthresh=1 wthresh=1
> TX RS bit threshold=0 - TXQ flags=0x0
> Press enter to exit
>
> Telling cores to stop...
> Waiting for lcores to finish...
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 0 RX-dropped: 0 RX-total: 0
> TX-packets: 0 TX-dropped: 0 TX-total: 0
> ----------------------------------------------------------------------------
>
> ---------------------- Forward statistics for port 1 ----------------------
> RX-packets: 0 RX-dropped: 0 RX-total: 0
> TX-packets: 0 TX-dropped: 0 TX-total: 0
> ----------------------------------------------------------------------------
>
> +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
> RX-packets: 0 RX-dropped: 0 RX-total: 0
> TX-packets: 0 TX-dropped: 0 TX-total: 0
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> Done.
>
> Shutting down port 0...
> Stopping ports...
> Done
> Closing ports...
> Done
>
> Shutting down port 1...
> Stopping ports...
> Done
> Closing ports...
> Done
>
> Bye...
> root@ubuntu:/home/harrison/git/dpdk# grep -i huge /proc/meminfo
> AnonHugePages: 208896 kB
> HugePages_Total: 1024
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> root@ubuntu:/home/harrison/git/dpdk# cd ../pktgen-dpdk/
> root@ubuntu:/home/harrison/git/pktgen-dpdk# ./tools/run.py harrison
> sudo ./app/x86_64-native-linuxapp-gcc/pktgen -l 0,1-4 -n 2 --proc-type auto --log-level 7 --socket-mem 512,512 --file-prefix pg -- -T -P --crc-strip -m [1:2].0 -m [3:4].1 -f themes/black-yellow.theme
>
> Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by Intel(r) DPDK
> EAL: Detected 8 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Not enough memory available on socket 1! Requested: 512MB, available: 0MB
> EAL: FATAL: Cannot init memory
>
> EAL: Cannot init memory
>
> root@ubuntu:/home/harrison/git/pktgen-dpdk# grep -i huge /proc/meminfo
> AnonHugePages: 208896 kB
> HugePages_Total: 1024
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> root@ubuntu:/home/harrison/git/pktgen-dpdk# sudo rm /mnt/huge/*
> root@ubuntu:/home/harrison/git/pktgen-dpdk# sudo rm /mnt/huge
> huge/ huge_1GB/
> root@ubuntu:/home/harrison/git/pktgen-dpdk# grep -i huge /proc/meminfo
> AnonHugePages: 208896 kB
> HugePages_Total: 1024
> HugePages_Free: 1024
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> root@ubuntu:/home/harrison/git/pktgen-dpdk# ./tools/run.py harrison
> sudo ./app/x86_64-native-linuxapp-gcc/pktgen -l 0,1-4 -n 2 --proc-type auto --log-level 7 --socket-mem 512,512 --file-prefix pg -- -T -P --crc-strip -m [1:2].0 -m [3:4].1 -f themes/black-yellow.theme
>
> Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by Intel(r) DPDK
> EAL: Detected 8 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Not enough memory available on socket 1! Requested: 512MB, available: 0MB
> EAL: FATAL: Cannot init memory
>
> EAL: Cannot init memory
>
> root@ubuntu:/home/harrison/git/pktgen-dpdk# grep -i huge /proc/meminfo
> AnonHugePages: 208896 kB
> HugePages_Total: 1024
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
Regards,
Keith