From: andrei <andrei@null.ro>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] dpdk concurrency
Date: Tue, 27 Feb 2018 09:57:17 +0200
Message-ID: <9bfd9dc6-7284-e0eb-5c93-8e12b2377943@null.ro>
In-Reply-To: <C530B3F5-254B-4BD9-92A9-004595149931@intel.com>
Hi Keith,
Thank you very much for your advice. I will try to redesign the access
to the mempools.
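
As a first step, something like the following configuration change (a
sketch only, following Keith's suggestion to turn off the per-lcore
caches; the pool name and sizes are illustrative, not from the real
application):

```c
/* Sketch, not from the real application: create the mbuf pool with
 * cache_size = 0 so there is no per-lcore cache that could be shared
 * unsafely between threads running on the same lcore.  All other
 * parameters are illustrative. */
#include <rte_mbuf.h>
#include <rte_lcore.h>

static struct rte_mempool *
create_uncached_pool(void)
{
    return rte_pktmbuf_pool_create("tx_pool",   /* illustrative name */
                                   8192,        /* number of mbufs */
                                   0,           /* cache_size = 0: no per-lcore cache */
                                   0,           /* private area size */
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   rte_socket_id());
}
```

With the cache disabled every put/get goes through the thread-safe
common ring, at some cost in performance, as Keith notes below.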
Regards,
Andrei Comandatu
On 02/26/2018 05:08 PM, Wiles, Keith wrote:
>
>> On Feb 26, 2018, at 8:36 AM, andrei <andrei@null.ro> wrote:
>>
>> Hi,
>>
>>
>> I had run into a deadlock by using DPDK, and I am out of ideas on how to
>> debug the issue.
>>
>>
>> Scenario (the application is more complicated; this is a simplified
>> version):
>>
>> 4 mempools, 4 tx buffers, 3 threads (CPU: 4 cores; irrelevant).
>>
>> One thread extracts buffers (rte_mbuf) from a randomly chosen memory
>> pool (rte_mempool) and places them into a ring buffer.
>>
>> Another thread (Sender) extracts the buffers from the ring, populates
>> them with data, and places them into an rte_eth_dev_tx_buffer by
>> calling rte_eth_tx_buffer().
>>
>> A third thread (Flusher) goes through the rte_eth_dev_tx_buffers and
>> calls rte_eth_tx_buffer_flush().
>>
>>
>> The deadlock occurs, in my opinion, when the Sender and the Flusher
>> threads try to return buffers to the same memory pool.
>>
>> This is a fragment of the core dump.
>>
>>
>> Thread 2 (Thread 0x7f5932e69700 (LWP 14014)):
>> #0 0x00007f59388e933a in common_ring_mp_enqueue () from
>> /usr/local/lib/librte_mempool.so.2.1
>> #1 0x00007f59386b27e0 in ixgbe_xmit_pkts () from
>> /usr/local/lib/librte_pmd_ixgbe.so.1.1
>> #2 0x00007f593d00aab7 in rte_eth_tx_burst (nb_pkts=<optimized out>,
>> tx_pkts=<optimized out>,
>> queue_id=0, port_id=<optimized out>) at
>> /usr/local/include/dpdk/rte_ethdev.h:2858
>> #3 rte_eth_tx_buffer_flush (buffer=<optimized out>, queue_id=0,
>> port_id=<optimized out>)
>> at /usr/local/include/dpdk/rte_ethdev.h:3040
>> #4 rte_eth_tx_buffer (tx_pkt=<optimized out>, buffer=<optimized out>,
>> queue_id=0,
>> port_id=<optimized out>) at /usr/local/include/dpdk/rte_ethdev.h:3090
>>
>>
>> Thread 30 (Thread 0x7f5933175700 (LWP 13958)):
>> #0 0x00007f59388e91cc in common_ring_mp_enqueue () from
>> /usr/local/lib/librte_mempool.so.2.1
>> #1 0x00007f59386b27e0 in ixgbe_xmit_pkts () from
>> /usr/local/lib/librte_pmd_ixgbe.so.1.1
>> #2 0x00007f593d007dfd in rte_eth_tx_burst (nb_pkts=<optimized out>,
>> tx_pkts=0x7f587a410358,
>> queue_id=<optimized out>, port_id=0 '\000') at
>> /usr/local/include/dpdk/rte_ethdev.h:2858
>> #3 rte_eth_tx_buffer_flush (buffer=0x7f587a410340, queue_id=<optimized
>> out>, port_id=0 '\000')
>> at /usr/local/include/dpdk/rte_ethdev.h:3040
>>
>>
>> Questions:
>>
>> 1. I am using DPDK 17.02.1. Has a bug been fixed in newer releases
>> that could explain this behavior?
>>
>> 2. If two threads try to place buffers into the same pool, should the
>> operation be synchronized by DPDK or by the application?
> Not sure this will help, but if you are running more than one thread per core then you can have problems. I assume you have a mempool cache here, and the per-lcore caches are not thread safe (if I remember correctly), while the main cache in the mempool is thread safe. If you are not using multiple threads per lcore, then I expect something else is going on here, as this is the normal mode of operation for mempools. If you turn off the mempool caches you should not see the problem, but then your performance will drop some.
>
> I had a similar problem and made sure only one thread puts or gets buffers from the same mempool.
>
>>
>> Thank you,
>>
>> Andrei Comandatu
>>
>>
> Regards,
> Keith
>
Thread overview: 3+ messages
2018-02-26 14:36 andrei
2018-02-26 15:08 ` Wiles, Keith
2018-02-27 7:57 ` andrei [this message]