From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sunil Kumar Kori <skori@marvell.com>
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: dev@dpdk.org, Sunil Kumar Kori
Date: Thu, 25 Jul 2019 11:13:06 +0530
Message-ID: <1564033387-14927-1-git-send-email-skori@marvell.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [dpdk-dev] [PATCH] net/octeontx2: fix handling indirect mbufs during TX
List-Id: DPDK patches and discussions

A multi-segmented packet may also be spliced together from indirect mbufs.
Currently the driver leaks buffers for indirect mbufs, since they are not
freed back to the packet pool. This patch fixes the handling of indirect
mbufs for the following use cases:

- the packet contains only indirect mbufs.
- the packet contains a mix of direct and indirect mbufs.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 drivers/net/octeontx2/otx2_tx.c |  8 ++--
 drivers/net/octeontx2/otx2_tx.h | 82 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 79 insertions(+), 11 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 6bce551..0dcadff 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -178,7 +178,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
 				offsetof(struct rte_mbuf, buf_iova));

-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
 				__mempool_check_cookies(mbuf->pool,
@@ -187,7 +187,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
 				offsetof(struct rte_mbuf, buf_iova));

-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
 				__mempool_check_cookies(mbuf->pool,
@@ -196,7 +196,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
 				offsetof(struct rte_mbuf, buf_iova));

-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
 				__mempool_check_cookies(mbuf->pool,
@@ -205,7 +205,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
 				offsetof(struct rte_mbuf, buf_iova));

-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
 				__mempool_check_cookies(mbuf->pool,
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index b75a220..87e747f 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -58,6 +58,72 @@
 	}
 }

+static __rte_always_inline uint64_t
+otx2_pktmbuf_detach(struct rte_mbuf *m)
+{
+	struct rte_mempool *mp = m->pool;
+	uint32_t mbuf_size, buf_len;
+	struct rte_mbuf *md;
+	uint16_t priv_size;
+	uint16_t refcount;
+
+	/* Update refcount of direct mbuf */
+	md = rte_mbuf_from_indirect(m);
+	refcount = rte_mbuf_refcnt_update(md, -1);
+
+	priv_size = rte_pktmbuf_priv_size(mp);
+	mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
+	buf_len = rte_pktmbuf_data_room_size(mp);
+
+	m->priv_size = priv_size;
+	m->buf_addr = (char *)m + mbuf_size;
+	m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
+	m->buf_len = (uint16_t)buf_len;
+	rte_pktmbuf_reset_headroom(m);
+	m->data_len = 0;
+	m->ol_flags = 0;
+	m->next = NULL;
+	m->nb_segs = 1;
+
+	/* Now indirect mbuf is safe to free */
+	rte_pktmbuf_free(m);
+
+	if (refcount == 0) {
+		rte_mbuf_refcnt_set(md, 1);
+		md->data_len = 0;
+		md->ol_flags = 0;
+		md->next = NULL;
+		md->nb_segs = 1;
+		return 0;
+	} else {
+		return 1;
+	}
+}
+
+static __rte_always_inline uint64_t
+otx2_nix_prefree_seg(struct rte_mbuf *m)
+{
+	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
+		if (!RTE_MBUF_DIRECT(m))
+			return otx2_pktmbuf_detach(m);
+
+		m->next = NULL;
+		m->nb_segs = 1;
+		return 0;
+	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
+		if (!RTE_MBUF_DIRECT(m))
+			return otx2_pktmbuf_detach(m);
+
+		rte_mbuf_refcnt_set(m, 1);
+		m->next = NULL;
+		m->nb_segs = 1;
+		return 0;
+	}
+
+	/* Mbuf has a refcount greater than 1, so it need not be freed */
+	return 1;
+}
+
 static inline void
 otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 {
@@ -189,9 +255,11 @@
 	*(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);

 	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
-		/* Set don't free bit if reference count > 1 */
-		if (rte_pktmbuf_prefree_seg(m) == NULL)
-			send_hdr->w0.df = 1; /* SET DF */
+		/* DF bit = 1 if refcount of current mbuf or parent mbuf
+		 * is greater than 1
+		 * DF bit = 0 otherwise
+		 */
+		send_hdr->w0.df = otx2_nix_prefree_seg(m);
 	}
 	/* Mark mempool object as "put" since it is freed by NIX */
 	if (!send_hdr->w0.df)
@@ -233,6 +301,8 @@

 	off = 0;
 	sg = (union nix_send_sg_s *)&cmd[2 + off];
+	/* Clear sg->u header before use */
+	sg->u &= 0xFC00000000000000;
 	sg_u = sg->u;
 	slist = &cmd[3 + off];

@@ -244,11 +314,9 @@
 		m_next = m->next;
 		sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
 		*slist = rte_mbuf_data_iova(m);
-		/* Set invert df if reference count > 1 */
+		/* Set invert df if buffer is not to be freed by H/W */
 		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-			sg_u |=
-				((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) <<
-				(i + 55));
+			sg_u |= (otx2_nix_prefree_seg(m) << (i + 55));
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!(sg_u & (1ULL << (i + 55)))) {
 			m->next = NULL;
--
1.8.3.1