From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Mon, 8 Jun 2015 16:57:22 +0200
Message-Id: <1433775442-31438-1-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1433151145-8176-1-git-send-email-olivier.matz@6wind.com>
References: <1433151145-8176-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update

In __rte_pktmbuf_prefree_seg(), there was an optimization to avoid using
a costly atomic operation when updating the mbuf reference counter if
its value is 1: in that case, we are the only owner of the mbuf, so
nobody can change it concurrently.

Generalize this optimization directly in rte_mbuf_refcnt_update() so
that the other callers of this function, like rte_pktmbuf_attach(), can
also take advantage of it.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mbuf/rte_mbuf.h | 57 +++++++++++++++++++++++-----------------------
 1 file changed, 28 insertions(+), 29 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..6c9cfd6 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -426,21 +426,6 @@ if (!(exp)) {                                        \
 #ifdef RTE_MBUF_REFCNT_ATOMIC
 
 /**
- * Adds given value to an mbuf's refcnt and returns its new value.
- * @param m
- *   Mbuf to update
- * @param value
- *   Value to add/subtract
- * @return
- *   Updated value
- */
-static inline uint16_t
-rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
-{
-	return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic, value));
-}
-
-/**
  * Reads the value of an mbuf's refcnt.
  * @param m
  *   Mbuf to read
@@ -466,6 +451,33 @@ rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 	rte_atomic16_set(&m->refcnt_atomic, new_value);
 }
 
+/**
+ * Adds given value to an mbuf's refcnt and returns its new value.
+ * @param m
+ *   Mbuf to update
+ * @param value
+ *   Value to add/subtract
+ * @return
+ *   Updated value
+ */
+static inline uint16_t
+rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
+{
+	/*
+	 * The atomic_add is an expensive operation, so we don't want to
+	 * call it in the case where we know we are the unique holder of
+	 * this mbuf (i.e. ref_cnt == 1). Otherwise, an atomic
+	 * operation has to be used because concurrent accesses on the
+	 * reference counter can occur.
+	 */
+	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
+		rte_mbuf_refcnt_set(m, 1 + value);
+		return 1 + value;
+	}
+
+	return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic, value));
+}
+
 #else /* ! RTE_MBUF_REFCNT_ATOMIC */
 
 /**
@@ -895,20 +907,7 @@ __rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 {
 	__rte_mbuf_sanity_check(m, 0);
 
-	/*
-	 * Check to see if this is the last reference to the mbuf.
-	 * Note: the double check here is deliberate. If the ref_cnt is "atomic"
-	 * the call to "refcnt_update" is a very expensive operation, so we
-	 * don't want to call it in the case where we know we are the holder
-	 * of the last reference to this mbuf i.e. ref_cnt == 1.
-	 * If however, ref_cnt != 1, it's still possible that we may still be
-	 * the final decrementer of the count, so we need to check that
-	 * result also, to make sure the mbuf is freed properly.
-	 */
-	if (likely (rte_mbuf_refcnt_read(m) == 1) ||
-	    likely (rte_mbuf_refcnt_update(m, -1) == 0)) {
-
-		rte_mbuf_refcnt_set(m, 0);
+	if (likely(rte_mbuf_refcnt_update(m, -1) == 0)) {
 
 		/* if this is an indirect mbuf, then
 		 *  - detach mbuf
-- 
2.1.4
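For readers outside the DPDK tree, the idea behind the new rte_mbuf_refcnt_update() can be sketched standalone. This is an illustrative toy, not the patch itself: toy_mbuf and toy_refcnt_update are made-up names, and C11 <stdatomic.h> stands in for DPDK's rte_atomic16 API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for struct rte_mbuf: only the reference
 * counter matters here; this is not the real DPDK mbuf layout. */
struct toy_mbuf {
	_Atomic uint16_t refcnt;
};

/* Sketch of the patch's fast path: when the caller holds the only
 * reference (refcnt == 1), no other thread can legitimately touch the
 * counter, so a plain store replaces the costly atomic
 * read-modify-write. Shared mbufs still take the atomic add, like
 * rte_atomic16_add_return() in the patch. */
static uint16_t
toy_refcnt_update(struct toy_mbuf *m, int16_t value)
{
	if (atomic_load_explicit(&m->refcnt, memory_order_relaxed) == 1) {
		/* sole owner: a non-atomic-style update is race-free */
		atomic_store_explicit(&m->refcnt, (uint16_t)(1 + value),
				      memory_order_relaxed);
		return (uint16_t)(1 + value);
	}
	/* shared: atomic add; fetch_add returns the old value, so add
	 * `value` once more to report the new one */
	return (uint16_t)(atomic_fetch_add_explicit(&m->refcnt,
			(uint16_t)value, memory_order_acq_rel) + value);
}
```

The fast path is safe precisely because observing refcnt == 1 means the current thread is the only entity allowed to modify the counter; any other thread would first need a reference of its own.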