* [dpdk-dev] [PATCH] mbuf: optimize first reference increment in rte_pktmbuf_attach
@ 2015-06-01  9:32 Olivier Matz
  2015-06-03 10:59 ` Bruce Richardson
  2015-06-08 14:57 ` [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update Olivier Matz
  0 siblings, 2 replies; 6+ messages in thread

From: Olivier Matz @ 2015-06-01  9:32 UTC (permalink / raw)
To: dev

As it's done in __rte_pktmbuf_prefree_seg(), we can avoid using an
atomic increment in rte_pktmbuf_attach() by checking if we are the
only owner of the mbuf first.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mbuf/rte_mbuf.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..cea35b7 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -838,7 +838,11 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 	else
 		md = rte_mbuf_from_indirect(m);
 
-	rte_mbuf_refcnt_update(md, 1);
+	/* optimize the case where we are the only owner */
+	if (likely(rte_mbuf_refcnt_read(md) == 1))
+		rte_mbuf_refcnt_set(md, 2);
+	else
+		rte_mbuf_refcnt_update(md, 1);
 	mi->priv_size = m->priv_size;
 	mi->buf_physaddr = m->buf_physaddr;
 	mi->buf_addr = m->buf_addr;
-- 
2.1.4

^ permalink raw reply	[flat|nested] 6+ messages in thread
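[Editor's note] The fast path in this patch rests on one invariant: if the reference count reads 1, the caller is the sole owner of the mbuf, so no other thread can race on the counter and a plain store is safe. The sketch below is a simplified, self-contained C11 model of that logic, not DPDK code; `toy_mbuf`, `toy_refcnt_read`, and `toy_attach_incref` are illustrative names standing in for the mbuf struct, `rte_mbuf_refcnt_read()`, and the attach-time increment.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Simplified stand-in for an mbuf: only the reference counter matters here. */
struct toy_mbuf {
	atomic_ushort refcnt;
};

/* Plain read of the counter; used only as a hint, so relaxed order is enough. */
static inline uint16_t
toy_refcnt_read(struct toy_mbuf *m)
{
	return atomic_load_explicit(&m->refcnt, memory_order_relaxed);
}

/* Attach-style increment: when refcnt == 1 the caller is the sole owner, so
 * no other thread can modify the counter concurrently and a plain store of 2
 * cannot lose an update; the expensive atomic add is skipped.  Otherwise the
 * atomic read-modify-write is mandatory. */
static inline void
toy_attach_incref(struct toy_mbuf *m)
{
	if (toy_refcnt_read(m) == 1)
		atomic_store_explicit(&m->refcnt, 2, memory_order_relaxed);
	else
		atomic_fetch_add(&m->refcnt, 1);
}
```

In the refcnt == 1 case there is by definition no concurrent writer, which is why a relaxed store suffices in this model; the atomic branch is only needed when other owners may decrement at the same time.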
* Re: [dpdk-dev] [PATCH] mbuf: optimize first reference increment in rte_pktmbuf_attach
  2015-06-01  9:32 [dpdk-dev] [PATCH] mbuf: optimize first reference increment in rte_pktmbuf_attach Olivier Matz
@ 2015-06-03 10:59 ` Bruce Richardson
  2015-06-05  9:07   ` Olivier MATZ
  2015-06-08 14:57 ` [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update Olivier Matz
  1 sibling, 1 reply; 6+ messages in thread

From: Bruce Richardson @ 2015-06-03 10:59 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev

On Mon, Jun 01, 2015 at 11:32:25AM +0200, Olivier Matz wrote:
> As it's done in __rte_pktmbuf_prefree_seg(), we can avoid using an
> atomic increment in rte_pktmbuf_attach() by checking if we are the
> only owner of the mbuf first.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  lib/librte_mbuf/rte_mbuf.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index ab6de67..cea35b7 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -838,7 +838,11 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
>  	else
>  		md = rte_mbuf_from_indirect(m);
>  
> -	rte_mbuf_refcnt_update(md, 1);
> +	/* optimize the case where we are the only owner */
> +	if (likely(rte_mbuf_refcnt_read(md) == 1))
> +		rte_mbuf_refcnt_set(md, 2);
> +	else
> +		rte_mbuf_refcnt_update(md, 1);
>  	mi->priv_size = m->priv_size;
>  	mi->buf_physaddr = m->buf_physaddr;
>  	mi->buf_addr = m->buf_addr;
> -- 
> 2.1.4
> 

Why not make the change inside rte_mbuf_refcnt_update() itself? If it is
ever called with a current refcnt of 1, it should always be safe to do the
update without a cmpset.

/Bruce

^ permalink raw reply	[flat|nested] 6+ messages in thread
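[Editor's note] The "without a cmpset" remark contrasts two update strategies: a compare-and-set retry loop, which is always safe under contention, versus a plain store, which is only safe when the caller is known to hold the sole reference. The helpers below are a hypothetical C11 illustration of that contrast, not DPDK code (DPDK actually uses `rte_atomic16_add_return()` for the atomic path).

```c
#include <stdatomic.h>
#include <stdint.h>

/* Fully general increment: retry a compare-exchange until no other thread
 * changed the counter between our read and our write.  Safe at any refcnt,
 * but each retry is an expensive read-modify-write. */
static inline uint16_t
cas_incref(atomic_ushort *refcnt)
{
	uint16_t old = atomic_load(refcnt);

	while (!atomic_compare_exchange_weak(refcnt, &old,
					     (uint16_t)(old + 1)))
		;	/* 'old' is reloaded on failure; recompute and retry */
	return (uint16_t)(old + 1);
}

/* Sole-owner increment: refcnt == 1 guarantees no concurrent writer exists,
 * so a plain store cannot lose an update and no retry loop is needed. */
static inline uint16_t
sole_owner_incref(atomic_ushort *refcnt)
{
	atomic_store_explicit(refcnt, 2, memory_order_relaxed);
	return 2;
}
```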
* Re: [dpdk-dev] [PATCH] mbuf: optimize first reference increment in rte_pktmbuf_attach
  2015-06-03 10:59 ` Bruce Richardson
@ 2015-06-05  9:07   ` Olivier MATZ
  0 siblings, 0 replies; 6+ messages in thread

From: Olivier MATZ @ 2015-06-05  9:07 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev

Hi Bruce,

On 06/03/2015 12:59 PM, Bruce Richardson wrote:
> On Mon, Jun 01, 2015 at 11:32:25AM +0200, Olivier Matz wrote:
>> As it's done in __rte_pktmbuf_prefree_seg(), we can avoid using an
>> atomic increment in rte_pktmbuf_attach() by checking if we are the
>> only owner of the mbuf first.
>>
>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>> ---
>>  lib/librte_mbuf/rte_mbuf.h | 6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
>> index ab6de67..cea35b7 100644
>> --- a/lib/librte_mbuf/rte_mbuf.h
>> +++ b/lib/librte_mbuf/rte_mbuf.h
>> @@ -838,7 +838,11 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
>>  	else
>>  		md = rte_mbuf_from_indirect(m);
>>  
>> -	rte_mbuf_refcnt_update(md, 1);
>> +	/* optimize the case where we are the only owner */
>> +	if (likely(rte_mbuf_refcnt_read(md) == 1))
>> +		rte_mbuf_refcnt_set(md, 2);
>> +	else
>> +		rte_mbuf_refcnt_update(md, 1);
>>  	mi->priv_size = m->priv_size;
>>  	mi->buf_physaddr = m->buf_physaddr;
>>  	mi->buf_addr = m->buf_addr;
>> -- 
>> 2.1.4
>>
> Why not make the change inside rte_mbuf_refcnt_update itself? If it is ever
> called with a current refcnt of 1, it should always be safe to do the update
> without a cmpset.

Good idea, I'll see if I can do that in a new version.

Thanks,
Olivier

> 
> /Bruce
> 

^ permalink raw reply	[flat|nested] 6+ messages in thread
* [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update
  2015-06-01  9:32 [dpdk-dev] [PATCH] mbuf: optimize first reference increment in rte_pktmbuf_attach Olivier Matz
  2015-06-03 10:59 ` Bruce Richardson
@ 2015-06-08 14:57 ` Olivier Matz
  2015-06-09 12:57   ` Bruce Richardson
  1 sibling, 1 reply; 6+ messages in thread

From: Olivier Matz @ 2015-06-08 14:57 UTC (permalink / raw)
To: dev

In __rte_pktmbuf_prefree_seg(), there was an optimization to avoid using
a costly atomic operation when updating the mbuf reference counter if
its value is 1. Indeed, it means that we are the only owner of the mbuf,
and therefore nobody can change it at the same time.

We can generalize this optimization directly in rte_mbuf_refcnt_update()
so the other callers of this function, like rte_pktmbuf_attach(), can
also take advantage of this optimization.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mbuf/rte_mbuf.h | 57 +++++++++++++++++++++++-----------------------
 1 file changed, 28 insertions(+), 29 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..6c9cfd6 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -426,21 +426,6 @@ if (!(exp)) {                                                \
 #ifdef RTE_MBUF_REFCNT_ATOMIC
 
 /**
- * Adds given value to an mbuf's refcnt and returns its new value.
- * @param m
- *   Mbuf to update
- * @param value
- *   Value to add/subtract
- * @return
- *   Updated value
- */
-static inline uint16_t
-rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
-{
-	return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic, value));
-}
-
-/**
  * Reads the value of an mbuf's refcnt.
  * @param m
  *   Mbuf to read
@@ -466,6 +451,33 @@ rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 	rte_atomic16_set(&m->refcnt_atomic, new_value);
 }
 
+/**
+ * Adds given value to an mbuf's refcnt and returns its new value.
+ * @param m
+ *   Mbuf to update
+ * @param value
+ *   Value to add/subtract
+ * @return
+ *   Updated value
+ */
+static inline uint16_t
+rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
+{
+	/*
+	 * The atomic_add is an expensive operation, so we don't want to
+	 * call it in the case where we know we are the uniq holder of
+	 * this mbuf (i.e. ref_cnt == 1). Otherwise, an atomic
+	 * operation has to be used because concurrent accesses on the
+	 * reference counter can occur.
+	 */
+	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
+		rte_mbuf_refcnt_set(m, 1 + value);
+		return 1 + value;
+	}
+
+	return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic, value));
+}
+
 #else /* ! RTE_MBUF_REFCNT_ATOMIC */
 
 /**
@@ -895,20 +907,7 @@ __rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 {
 	__rte_mbuf_sanity_check(m, 0);
 
-	/*
-	 * Check to see if this is the last reference to the mbuf.
-	 * Note: the double check here is deliberate. If the ref_cnt is "atomic"
-	 * the call to "refcnt_update" is a very expensive operation, so we
-	 * don't want to call it in the case where we know we are the holder
-	 * of the last reference to this mbuf i.e. ref_cnt == 1.
-	 * If however, ref_cnt != 1, it's still possible that we may still be
-	 * the final decrementer of the count, so we need to check that
-	 * result also, to make sure the mbuf is freed properly.
-	 */
-	if (likely (rte_mbuf_refcnt_read(m) == 1) ||
-	    likely (rte_mbuf_refcnt_update(m, -1) == 0)) {
-
-		rte_mbuf_refcnt_set(m, 0);
+	if (likely(rte_mbuf_refcnt_update(m, -1) == 0)) {
 
 		/* if this is an indirect mbuf, then
 		 *  - detach mbuf
-- 
2.1.4

^ permalink raw reply	[flat|nested] 6+ messages in thread
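[Editor's note] The v2 patch folds the sole-owner shortcut into the general update, so both the attach-time increment and the free-time decrement hit the fast path. The model below sketches that generalized behavior in self-contained C11; `toy_mbuf`, `toy_refcnt_update`, and `toy_prefree` are illustrative stand-ins for the mbuf struct, `rte_mbuf_refcnt_update()`, and `__rte_pktmbuf_prefree_seg()`, and the atomic add here plays the role of `rte_atomic16_add_return()`.

```c
#include <stdatomic.h>
#include <stdint.h>

struct toy_mbuf {
	atomic_ushort refcnt;
};

static inline uint16_t
toy_refcnt_read(struct toy_mbuf *m)
{
	return atomic_load_explicit(&m->refcnt, memory_order_relaxed);
}

/* Generalized update, in the spirit of the v2 patch: if refcnt is 1 the
 * caller is the sole owner, so any update (increment or the final
 * decrement) can be a plain store; otherwise an atomic add is required
 * because other owners may update the counter concurrently. */
static inline uint16_t
toy_refcnt_update(struct toy_mbuf *m, int16_t value)
{
	if (toy_refcnt_read(m) == 1) {
		atomic_store_explicit(&m->refcnt, (uint16_t)(1 + value),
				      memory_order_relaxed);
		return (uint16_t)(1 + value);
	}
	/* atomic_fetch_add returns the OLD value; add 'value' to get the
	 * new one, matching an add-and-return primitive. */
	return (uint16_t)(atomic_fetch_add(&m->refcnt, value) + value);
}

/* Free path in the style of __rte_pktmbuf_prefree_seg(): the old two-step
 * check collapses into one call.  Returns 1 when the caller held the last
 * reference and should release the buffer. */
static inline int
toy_prefree(struct toy_mbuf *m)
{
	return toy_refcnt_update(m, -1) == 0;
}
```

Note how the old `refcnt_read(m) == 1 || refcnt_update(m, -1) == 0` double check becomes a single `toy_refcnt_update(m, -1) == 0`: the read-for-1 shortcut now lives inside the update itself.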
* Re: [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update
  2015-06-08 14:57 ` [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update Olivier Matz
@ 2015-06-09 12:57   ` Bruce Richardson
  2015-06-12 14:10     ` Thomas Monjalon
  0 siblings, 1 reply; 6+ messages in thread

From: Bruce Richardson @ 2015-06-09 12:57 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev

On Mon, Jun 08, 2015 at 04:57:22PM +0200, Olivier Matz wrote:
> In __rte_pktmbuf_prefree_seg(), there was an optimization to avoid using
> a costly atomic operation when updating the mbuf reference counter if
> its value is 1. Indeed, it means that we are the only owner of the mbuf,
> and therefore nobody can change it at the same time.
> 
> We can generalize this optimization directly in rte_mbuf_refcnt_update()
> so the other callers of this function, like rte_pktmbuf_attach(), can
> also take advantage of this optimization.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update
  2015-06-09 12:57   ` Bruce Richardson
@ 2015-06-12 14:10     ` Thomas Monjalon
  0 siblings, 0 replies; 6+ messages in thread

From: Thomas Monjalon @ 2015-06-12 14:10 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev

2015-06-09 13:57, Bruce Richardson:
> On Mon, Jun 08, 2015 at 04:57:22PM +0200, Olivier Matz wrote:
> > In __rte_pktmbuf_prefree_seg(), there was an optimization to avoid using
> > a costly atomic operation when updating the mbuf reference counter if
> > its value is 1. Indeed, it means that we are the only owner of the mbuf,
> > and therefore nobody can change it at the same time.
> > 
> > We can generalize this optimization directly in rte_mbuf_refcnt_update()
> > so the other callers of this function, like rte_pktmbuf_attach(), can
> > also take advantage of this optimization.
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>

Applied, thanks

^ permalink raw reply	[flat|nested] 6+ messages in thread