From: Olivier MATZ <olivier.matz@6wind.com>
To: "Hanoch Haim (hhaim)" <hhaim@cisco.com>,
"bruce.richardson@intel.com" <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Ido Barnea \(ibarnea\)" <ibarnea@cisco.com>,
"Itay Marom \(imarom\)" <imarom@cisco.com>
Subject: Re: [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update
Date: Tue, 05 Jan 2016 11:57:37 +0100 [thread overview]
Message-ID: <568BA1A1.2070300@6wind.com> (raw)
In-Reply-To: <7f5255b98dcb4f2396ada16d5eb43e5a@XCH-RTP-017.cisco.com>
Hi Hanoch,
On 01/04/2016 03:43 PM, Hanoch Haim (hhaim) wrote:
> Hi Olivier,
>
> Let's take your drawing as a reference and add my question
> The use case is sending a duplicate multicast packet by many threads.
> I can split the job across x threads and use an atomic reference count (on my multicast object, not the mbuf) to track completion until it reaches zero.
>
> In my following example the two cores (0 and 1) sending the indirect m1/m2 do alloc/attach/send
>
> core0 | core1
> --------------------------------- |---------------------------------------
> m_const=rte_pktmbuf_alloc(mp) |
> |
> while true: | while True:
> m1 =rte_pktmbuf_alloc(mp_64) | m2 =rte_pktmbuf_alloc(mp_64)
> rte_pktmbuf_attach(m1, m_const) | rte_pktmbuf_attach(m2, m_const)
> tx_burst(m1) | tx_burst(m2)
>
> Is this example not valid?
For me, m_const is not expected to be used concurrently on
several cores. By "used", I mean calling a function that modifies
the mbuf, which is the case for rte_pktmbuf_attach().
> BTW this is our workaround
>
>
> core0 | core1
> --------------------------------- |---------------------------------------
> m_const=rte_pktmbuf_alloc(mp) |
> rte_mbuf_refcnt_update(m_const,1)| <<-- workaround
> |
> while true: | while True:
> m1 =rte_pktmbuf_alloc(mp_64) | m2 =rte_pktmbuf_alloc(mp_64)
> rte_pktmbuf_attach(m1, m_const) | rte_pktmbuf_attach(m2, m_const)
> tx_burst(m1) | tx_burst(m2)
This workaround indeed solves the issue. Another solution would be to
protect the call to attach() with a lock, or to make all the
rte_pktmbuf_attach() calls on the same core.
I'm open to discuss this behavior for rte_pktmbuf_attach() function
(should concurrent calls be allowed or not). In any case, we may
want to better document it in the doxygen API comments.
Regards,
Olivier
Thread overview: 12+ messages
2015-12-27 9:39 Hanoch Haim (hhaim)
2016-01-04 13:53 ` Olivier MATZ
2016-01-04 14:43 ` Hanoch Haim (hhaim)
2016-01-05 10:57 ` Olivier MATZ [this message]
2016-01-05 11:11 ` Hanoch Haim (hhaim)
2016-01-05 12:12 ` Olivier MATZ
2016-01-13 11:48 ` Bruce Richardson
2016-01-13 16:28 ` Hanoch Haim (hhaim)
2016-01-13 16:40 ` Bruce Richardson
-- strict thread matches above, loose matches on Subject: below --
2015-06-01 9:32 [dpdk-dev] [PATCH] mbuf: optimize first reference increment in rte_pktmbuf_attach Olivier Matz
2015-06-08 14:57 ` [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update Olivier Matz
2015-06-09 12:57 ` Bruce Richardson
2015-06-12 14:10 ` Thomas Monjalon