From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1101] [dpdk-22.11.0rc1][meson test] driver-tests/eventdev_selftest_sw test failed
Date: Wed, 12 Oct 2022 05:33:03 +0000
Message-ID: <bug-1101-3@http.bugs.dpdk.org/>
https://bugs.dpdk.org/show_bug.cgi?id=1101
Bug ID: 1101
Summary: [dpdk-22.11.0rc1][meson test]
driver-tests/eventdev_selftest_sw test failed
Product: DPDK
Version: 22.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: meson
Assignee: dev@dpdk.org
Reporter: weiyuanx.li@intel.com
Target Milestone: ---
[Environment]
DPDK version: dpdk-22.11.0rc1 (commit a74b1b25136a592c275afbfa6b70771469750aee)
OS: Ubuntu 22.04.1 LTS (Jammy Jellyfish)/5.15.0-27-generic
Compiler: gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Hardware platform: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
NIC hardware: Ethernet Controller XXV710 for 25GbE SFP28 [8086:158b]
NIC firmware:
driver: i40e
version: 5.15.0-27-generic
firmware-version: 9.00 0x8000cead 1.3179.0
[Test Setup]
Steps to reproduce
1. Use the following command to build DPDK:
CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static
x86_64-native-linuxapp-gcc/
ninja -C x86_64-native-linuxapp-gcc/
2. Execute the following command in the dpdk directory.
MALLOC_PERTURB_=204 DPDK_TEST=eventdev_selftest_sw
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff
[Show the output from the previous commands]
root@dpdk-VF-dut247:~/dpdk# MALLOC_PERTURB_=204 DPDK_TEST=eventdev_selftest_sw
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff
EAL: Detected CPU lcores: 72
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
APP: HPET is not enabled, using TSC as default timer
RTE>>eventdev_selftest_sw
*** Running Single Directed Packet test...
*** Running Directed Forward Credit test...
*** Running Single Load Balanced Packet test...
*** Running Unordered Basic test...
*** Running Ordered Basic test...
*** Running Burst Packets test...
*** Running Load Balancing test...
*** Running Prioritized Directed test...
*** Running Prioritized Atomic test...
*** Running Prioritized Ordered test...
*** Running Prioritized Unordered test...
*** Running Invalid QID test...
*** Running Load Balancing History test...
*** Running Inflight Count test...
*** Running Abuse Inflights test...
*** Running XStats test...
*** Running XStats ID Reset test...
12: 1761: qid_0_port_2_pinned_flows value incorrect, expected 1 got 7
1778: qid_0_port_2_pinned_flows value incorrect, expected 1 got 7
ERROR - XStats ID Reset test FAILED.
SW Eventdev Selftest Failed.
Test Failed
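The counter flagged above is an eventdev xstat exposed by the SW event device. A minimal sketch of reading such a counter by name is shown below; the helper name is hypothetical, and it assumes EAL is initialized and dev_id refers to an already configured event_sw device (e.g. created with --vdev=event_sw0), since the per-port counters only exist once queues and ports have been set up.

#include <stdio.h>
#include <inttypes.h>
#include <rte_eventdev.h>

/* Hedged sketch: read one SW eventdev xstat by its name. The helper name
 * is hypothetical; it assumes an already configured event_sw device. */
static void
print_pinned_flows(uint8_t dev_id)
{
        uint64_t val = rte_event_dev_xstats_by_name_get(dev_id,
                        "qid_0_port_2_pinned_flows", NULL);
        printf("qid_0_port_2_pinned_flows = %" PRIu64 "\n", val);
}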
[Expected Result]
Test ok.
[Regression]
Is this issue a regression: (Y/N) Y
a2833ecc5ea4adcbc3b77e7aeac2a6fd945da6a0 is the first bad commit
commit a2833ecc5ea4adcbc3b77e7aeac2a6fd945da6a0
Author: Morten Brørup <mb@smartsharesystems.com>
Date: Fri Oct 7 13:44:50 2022 +0300
mempool: fix get objects from mempool with cache
A flush threshold for the mempool cache was introduced in DPDK version
1.3, but rte_mempool_do_generic_get() was not completely updated back
then, and some inefficiencies were introduced.
Fix the following in rte_mempool_do_generic_get():
1. The code that initially screens the cache request was not updated
with the change in DPDK version 1.3.
The initial screening compared the request length to the cache size,
which was correct before, but became irrelevant with the introduction of
the flush threshold. E.g. the cache can hold up to flushthresh objects,
which is more than its size, so some requests were not served from the
cache, even though they could be.
The initial screening has now been corrected to match the initial
screening in rte_mempool_do_generic_put(), which verifies that a cache
is present, and that the length of the request does not overflow the
memory allocated for the cache.
This bug caused a major performance degradation in scenarios where the
application burst length is the same as the cache size. In such cases,
the objects were never fetched from the mempool cache, even when they
could have been.
This scenario occurs e.g. if an application has configured a mempool
with a size matching the application's burst size.
2. The function is a helper for rte_mempool_generic_get(), so it must
behave according to the description of that function.
Specifically, objects must first be returned from the cache,
subsequently from the backend.
After the change in DPDK version 1.3, this was not the behavior when
the request was partially satisfied from the cache; instead, the objects
from the backend were returned ahead of the objects from the cache.
This bug degraded application performance on CPUs with a small L1 cache,
which benefit from having the hot objects first in the returned array.
(This is probably also the reason why the function returns the objects
in reverse order, which it still does.)
Now, all code paths first return objects from the cache, subsequently
from the backend.
The function was not behaving as described by the function using it,
nor as expected by applications using it; this in itself is also a bug.
3. If the cache could not be backfilled, the function would attempt
to get all the requested objects from the backend (instead of only the
requested objects not already served from the cache), and would fail
if that failed.
Now, the first part of the request is always satisfied from the cache,
and if the subsequent backfilling of the cache from the backend fails,
only the remaining requested objects are retrieved from the backend.
The function could therefore fail even though there were enough objects
in the cache plus the common pool.
4. The code flow for satisfying the request from the cache was slightly
inefficient:
The likely code path where the objects are simply served from the cache
was treated as unlikely. Now it is treated as likely.
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
lib/mempool/rte_mempool.h | 80 ++++++++++++++++++++++++++++++-----------------
1 file changed, 51 insertions(+), 29 deletions(-)
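For context, a heavily simplified sketch of the corrected get flow described in points 1-3 above. The function name is hypothetical; statistics updates and error rollback are omitted; and the overflow check is written against the cache's object array rather than the exact upstream constant. This is not the upstream rte_mempool_do_generic_get() implementation, only an illustration of the order of operations (cache first, then backend).

#include <rte_mempool.h>

/* Hypothetical sketch only -- not the upstream implementation. */
static int
sketch_do_generic_get(struct rte_mempool *mp, void **obj_table,
                      unsigned int n, struct rte_mempool_cache *cache)
{
        unsigned int remaining = n;
        unsigned int i, len;

        if (cache == NULL)
                goto driver_dequeue;

        /* Point 2: serve as much as possible from the cache first, so the
         * hottest objects land at the front of obj_table. */
        len = RTE_MIN(remaining, cache->len);
        for (i = 0; i < len; i++)
                *obj_table++ = cache->objs[--cache->len];
        remaining -= len;
        if (remaining == 0)
                return 0;

        /* Points 1 and 3: refill the cache from the backend and serve the
         * rest from it, unless the refill would overflow the memory
         * allocated for the cache. */
        if (cache->size + remaining > RTE_DIM(cache->objs))
                goto driver_dequeue;
        if (rte_mempool_ops_dequeue_bulk(mp, cache->objs,
                                         cache->size + remaining) < 0)
                goto driver_dequeue;
        cache->len = cache->size + remaining;
        for (i = 0; i < remaining; i++)
                *obj_table++ = cache->objs[--cache->len];
        return 0;

driver_dequeue:
        /* Point 3: no usable cache or the refill failed -- fetch only the
         * objects that are still missing directly from the backend. */
        return rte_mempool_ops_dequeue_bulk(mp, obj_table, remaining);
}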
--
You are receiving this mail because:
You are the assignee for the bug.