DPDK usage discussions
From: Ian Trick <ian.trick@multapplied.net>
To: users@dpdk.org
Cc: cristian.dumitrescu@intel.com
Subject: [dpdk-users] qos_sched in DPDK 17.11.0 fails to initialize mbuf pool
Date: Thu, 16 Nov 2017 17:24:03 -0800	[thread overview]
Message-ID: <ffabc15f-f69d-483d-9198-5ae433fb0d73@multapplied.net> (raw)

Hi. I'm having an issue starting the qos_sched example program.

# ./examples/qos_sched/build/qos_sched --no-huge -l 1,2,3 --vdev
net_af_packet0,iface=eth1 -- --pfc "0,0,2,3" --cfg
examples/qos_sched/profile_ov.cfg

EAL: Detected 16 lcore(s)
EAL: Probing VFIO support...
EAL: Started without hugepages support, physical addresses not available
EAL: PCI device 0000:08:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10d3 net_e1000_em
PMD: Initializing pmd_af_packet for net_af_packet0
PMD: net_af_packet0: AF_PACKET MMAP parameters:
PMD: net_af_packet0:    block size 4096
PMD: net_af_packet0:    block count 256
PMD: net_af_packet0:    frame size 2048
PMD: net_af_packet0:    frame count 512
PMD: net_af_packet0: creating AF_PACKET-backed ethdev on numa socket 0
EAL: Error - exiting with code: 1
  Cause: Cannot init mbuf pool for socket 0


This is version 17.11.0 from the repo. My RTE_TARGET is
x86_64-native-linuxapp-clang. eth1 is a veth. I've tried running with
`-m` and using a low value but the issue still happens.

From what I can tell, rte_pktmbuf_pool_create() is failing and rte_errno
is set to EINVAL.

In librte_mempool/rte_mempool.c, the function
rte_mempool_populate_virt() is hitting this check and returning -EINVAL:

	if (RTE_ALIGN_CEIL(len, pg_sz) != len)
		return -EINVAL;

In that context, len is mz->len, the length of a memzone passed in by the
caller, rte_mempool_populate_default(), which reserved it here:

	mz = rte_memzone_reserve_aligned(mz_name, size,
		mp->socket_id, mz_flags, align);
	/* not enough memory, retry with the biggest zone we have */
	if (mz == NULL)
		mz = rte_memzone_reserve_aligned(mz_name, 0,
			mp->socket_id, mz_flags, align);

The first call fails, and the second succeeds because it passes 0 as
the size. In that case memzone_reserve_aligned_thread_unsafe(), in
librte_eal/common/eal_common_memzone.c, computes the length this way:

	requested_len = find_heap_max_free_elem(&socket_id, align);

So the align value is 4096, but the value returned by
find_heap_max_free_elem() apparently isn't a multiple of that -- I
think? Since it fails the check later on.

I'm not sure whether this is an environment issue where I don't have
enough memory (although I would have expected a different error for
that), whether I'm missing the right program arguments, or whether one
of these functions isn't doing what it's supposed to.

Thread overview: 5+ messages
2017-11-17  1:24 Ian Trick [this message]
2017-11-17 12:19 ` Dumitrescu, Cristian
2017-11-17 17:50   ` Ian Trick
2017-11-17 18:34     ` Dumitrescu, Cristian
2017-11-17 21:26       ` Ian Trick
