From: Pavel Vazharov <freakpv@gmail.com>
To: users <users@dpdk.org>
Subject: [dpdk-users] rte_malloc behavior
Date: Sat, 24 Jul 2021 17:54:29 +0300 [thread overview]
Message-ID: <CAK9EM1-34xATiHNfoMVs=1mc+MKy-w4cwL4-Jcu5u9ixe3wAEw@mail.gmail.com> (raw)
Hi,
Short intro to the original cause of my problem.
We have an application where we use the FreeBSD stack running on top of
DPDK. It's based on F-stack <https://github.com/F-Stack/f-stack> but with
lots of modifications at this point. There are situations where we send
lots of TCP data in a short period of time. Lots of 16KB blocks. The
FreeBSD networking stack internally splits these blocks into so-called jumbo
clusters <http://nginx.org/en/docs/freebsd_tuning.html#mbufs> (4K size)
before putting them into the TCP socket buffers. All allocations needed
by the FreeBSD stack are redirected to rte_malloc. I observed that
during such TCP sends we get peak delays in the sendmsg API call and
tracked these delays down to the rte_malloc calls.
After that, I ran some tests with the following piece of C++ code, trying
to isolate the issue further.
#include <chrono>
#include <vector>
#include <fmt/format.h>
#include <rte_malloc.h>

[[gnu::noinline]] static void
test_allocations(int allocations, int size, int align) noexcept
{
    using namespace std::chrono;
    std::vector<void*> mem(allocations);
    const auto beg = high_resolution_clock::now();
    for (int i = 0; i < allocations; ++i) {
        mem[i] = rte_malloc(nullptr, size, align);
        X3ME_ENFORCE(mem[i]);
    }
    const auto end = high_resolution_clock::now();
    fmt::print("Allocations:{} Size:{} Align:{} Time_msecs:{} Avg_time_usecs:{}\n",
               allocations, size, align,
               duration_cast<milliseconds>(end - beg).count(),
               duration_cast<microseconds>(end - beg).count() / allocations);
    for (void* m : mem) rte_free(m);
}
The results show big delays in rte_malloc when we ask for 1K or 4K
blocks. These delays are not present when the size is not an exact power
of two, as with the slightly larger sizes below.
Allocations:4096 Size:4096 Align:4096 Time_msecs:330 Avg_time_usecs:80
Allocations:16384 Size:4096 Align:4096 Time_msecs:8724 Avg_time_usecs:532
Allocations:32768 Size:4096 Align:4096 Time_msecs:38291 Avg_time_usecs:1168
Allocations:4096 Size:4112 Align:4096 Time_msecs:12 Avg_time_usecs:3
Allocations:16384 Size:4112 Align:4096 Time_msecs:45 Avg_time_usecs:2
Allocations:32768 Size:4112 Align:4096 Time_msecs:83 Avg_time_usecs:2
Allocations:4096 Size:1024 Align:1024 Time_msecs:244 Avg_time_usecs:59
Allocations:16384 Size:1024 Align:1024 Time_msecs:4428 Avg_time_usecs:270
Allocations:32768 Size:1024 Align:1024 Time_msecs:26901 Avg_time_usecs:820
Allocations:4096 Size:1040 Align:1024 Time_msecs:4 Avg_time_usecs:1
Allocations:16384 Size:1040 Align:1024 Time_msecs:16 Avg_time_usecs:1
Allocations:32768 Size:1040 Align:1024 Time_msecs:30 Avg_time_usecs:0
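The numbers above suggest a possible workaround: requesting just a bit more
than an exact power of two (e.g. 4112 instead of 4096) avoids the slow
path entirely. A minimal sketch of a padding helper along those lines
follows; it is hypothetical (not part of my actual code), and the 64-byte
pad is an assumption based on the cache line size, not on anything in the
DPDK documentation:

```cpp
#include <cstddef>

// Hypothetical helper: if the requested size is an exact power of two,
// pad it by one cache line (64 bytes assumed here) so that rte_malloc
// does not receive an exact power-of-two size.
constexpr std::size_t pad_if_pow2(std::size_t size) noexcept
{
    constexpr std::size_t cache_line = 64;
    const bool is_pow2 = size != 0 && (size & (size - 1)) == 0;
    return is_pow2 ? size + cache_line : size;
}

// Usage (sketch): mem[i] = rte_malloc(nullptr, pad_if_pow2(size), align);
```

This trades a little memory per allocation for avoiding whatever slow path
the exact power-of-two sizes are hitting.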
And just for reference, here is the speed of the same allocations using the
standard aligned_alloc/free API instead of rte_malloc/rte_free.
Allocations:32768 Size:1024 Align:1024 Time_msecs:66 Avg_time_usecs:2
Allocations:32768 Size:4096 Align:4096 Time_msecs:118 Avg_time_usecs:3
As far as I know, some allocators are inefficient with particular
allocation sizes, but I hadn't expected such a big difference.
Am I missing something in the documentation that explains this behavior?
Should I report it to the dev mailing list?
Regards,
Pavel.