From: andrei <andrei@null.ro>
To: users@dpdk.org
Subject: [dpdk-users] dpdk concurrency
Date: Mon, 26 Feb 2018 16:36:39 +0200
Message-ID: <b4f71112-5f81-4286-a717-1f5ee931e35b@null.ro>
Hi,
I have run into a deadlock using DPDK, and I am out of ideas on how to
debug the issue.
Scenario (the real application is more complicated; this is a simplified
version):
4 mempools, 4 tx buffers, 3 threads (the CPU has 4 cores, which is likely
irrelevant).
One thread extracts buffers (rte_mbuf) from a random memory pool
(rte_mempool) and places them into a ring buffer.
Another thread (Sender) extracts the buffers from the ring, populates
them with data, and places them into an rte_eth_dev_tx_buffer by calling
rte_eth_tx_buffer().
A third thread (Flusher) goes through the rte_eth_dev_tx_buffers and
calls rte_eth_tx_buffer_flush().
The deadlock occurs, in my opinion, when the Sender and Flusher threads
try to place buffers back into the same memory pool.
This is a fragment of the core dump:
Thread 2 (Thread 0x7f5932e69700 (LWP 14014)):
#0  0x00007f59388e933a in common_ring_mp_enqueue () from /usr/local/lib/librte_mempool.so.2.1
#1  0x00007f59386b27e0 in ixgbe_xmit_pkts () from /usr/local/lib/librte_pmd_ixgbe.so.1.1
#2  0x00007f593d00aab7 in rte_eth_tx_burst (nb_pkts=<optimized out>, tx_pkts=<optimized out>, queue_id=0, port_id=<optimized out>) at /usr/local/include/dpdk/rte_ethdev.h:2858
#3  rte_eth_tx_buffer_flush (buffer=<optimized out>, queue_id=0, port_id=<optimized out>) at /usr/local/include/dpdk/rte_ethdev.h:3040
#4  rte_eth_tx_buffer (tx_pkt=<optimized out>, buffer=<optimized out>, queue_id=0, port_id=<optimized out>) at /usr/local/include/dpdk/rte_ethdev.h:3090

Thread 30 (Thread 0x7f5933175700 (LWP 13958)):
#0  0x00007f59388e91cc in common_ring_mp_enqueue () from /usr/local/lib/librte_mempool.so.2.1
#1  0x00007f59386b27e0 in ixgbe_xmit_pkts () from /usr/local/lib/librte_pmd_ixgbe.so.1.1
#2  0x00007f593d007dfd in rte_eth_tx_burst (nb_pkts=<optimized out>, tx_pkts=0x7f587a410358, queue_id=<optimized out>, port_id=0 '\000') at /usr/local/include/dpdk/rte_ethdev.h:2858
#3  rte_eth_tx_buffer_flush (buffer=0x7f587a410340, queue_id=<optimized out>, port_id=0 '\000') at /usr/local/include/dpdk/rte_ethdev.h:3040
Questions:
1. I am using DPDK 17.02.1. Was a bug fixed in a newer release that
could explain this behavior?
2. If two threads put buffers back into the same pool, should the
operation be synchronized by DPDK or by the application?
Thank you,
Andrei Comandatu
Thread overview: 3+ messages
2018-02-26 14:36 andrei [this message]
2018-02-26 15:08 ` Wiles, Keith
2018-02-27  7:57   ` andrei