From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sunil Kumar Kori
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Sunil Kumar Kori
Date: Thu, 25 Jul 2019 13:50:59 +0530
Message-ID: <1564042860-27927-1-git-send-email-skori@marvell.com>
In-Reply-To: <1564033387-14927-1-git-send-email-skori@marvell.com>
References: <1564033387-14927-1-git-send-email-skori@marvell.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [dpdk-dev] [PATCH v2 1/1] net/octeontx2: fix handling indirect mbufs during Tx
List-Id: DPDK patches and discussions
Sender: "dev"

A multi-segment packet may be spliced together from indirect mbufs as well.
Currently the driver leaks buffers for indirect mbufs, as they are not
freed back to the packet pool.

The patch fixes the handling of indirect mbufs for the following use cases:
- the packet contains only indirect mbufs.
- the packet contains a mix of direct and indirect mbufs.
Fixes: cbd5710db48d ("net/octeontx2: add Tx multi segment version")

Signed-off-by: Sunil Kumar Kori
---
v2:
 - Add Fixes tag

 drivers/net/octeontx2/otx2_tx.c |  8 ++--
 drivers/net/octeontx2/otx2_tx.h | 82 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 79 insertions(+), 11 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 6bce551..0dcadff 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -178,7 +178,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
 				offsetof(struct rte_mbuf, buf_iova));
-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 0);
 			else
 				__mempool_check_cookies(mbuf->pool,
@@ -187,7 +187,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
 				offsetof(struct rte_mbuf, buf_iova));
-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask01, 1);
 			else
 				__mempool_check_cookies(mbuf->pool,
@@ -196,7 +196,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
 				offsetof(struct rte_mbuf, buf_iova));
-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 0);
 			else
 				__mempool_check_cookies(mbuf->pool,
@@ -205,7 +205,7 @@
 			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
 				offsetof(struct rte_mbuf, buf_iova));
-			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+			if (otx2_nix_prefree_seg(mbuf))
 				vsetq_lane_u64(0x80000, xmask23, 1);
 			else
 				__mempool_check_cookies(mbuf->pool,
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index b75a220..87e747f 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -58,6 +58,72 @@
 	}
 }
 
+static __rte_always_inline uint64_t
+otx2_pktmbuf_detach(struct rte_mbuf *m)
+{
+	struct rte_mempool *mp = m->pool;
+	uint32_t mbuf_size, buf_len;
+	struct rte_mbuf *md;
+	uint16_t priv_size;
+	uint16_t refcount;
+
+	/* Update refcount of direct mbuf */
+	md = rte_mbuf_from_indirect(m);
+	refcount = rte_mbuf_refcnt_update(md, -1);
+
+	priv_size = rte_pktmbuf_priv_size(mp);
+	mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
+	buf_len = rte_pktmbuf_data_room_size(mp);
+
+	m->priv_size = priv_size;
+	m->buf_addr = (char *)m + mbuf_size;
+	m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
+	m->buf_len = (uint16_t)buf_len;
+	rte_pktmbuf_reset_headroom(m);
+	m->data_len = 0;
+	m->ol_flags = 0;
+	m->next = NULL;
+	m->nb_segs = 1;
+
+	/* Now indirect mbuf is safe to free */
+	rte_pktmbuf_free(m);
+
+	if (refcount == 0) {
+		rte_mbuf_refcnt_set(md, 1);
+		md->data_len = 0;
+		md->ol_flags = 0;
+		md->next = NULL;
+		md->nb_segs = 1;
+		return 0;
+	} else {
+		return 1;
+	}
+}
+
+static __rte_always_inline uint64_t
+otx2_nix_prefree_seg(struct rte_mbuf *m)
+{
+	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
+		if (!RTE_MBUF_DIRECT(m))
+			return otx2_pktmbuf_detach(m);
+
+		m->next = NULL;
+		m->nb_segs = 1;
+		return 0;
+	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
+		if (!RTE_MBUF_DIRECT(m))
+			return otx2_pktmbuf_detach(m);
+
+		rte_mbuf_refcnt_set(m, 1);
+		m->next = NULL;
+		m->nb_segs = 1;
+		return 0;
+	}
+
+	/* Mbuf refcount is greater than 1, so it must not be freed */
+	return 1;
+}
+
 static inline void
 otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 {
@@ -189,9 +255,11 @@
 	*(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
 
 	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
-		/* Set don't free bit if reference count > 1 */
-		if (rte_pktmbuf_prefree_seg(m) == NULL)
-			send_hdr->w0.df = 1; /* SET DF */
+		/* DF bit = 1 if refcount of current mbuf or parent mbuf
+		 * is greater than 1
+		 * DF bit = 0 otherwise
+		 */
+		send_hdr->w0.df = otx2_nix_prefree_seg(m);
 	}
 	/* Mark mempool object as "put" since it is freed by NIX */
 	if (!send_hdr->w0.df)
@@ -233,6 +301,8 @@
 	off = 0;
 
 	sg = (union nix_send_sg_s *)&cmd[2 + off];
+	/* Clear sg->u header before use */
+	sg->u &= 0xFC00000000000000;
 	sg_u = sg->u;
 	slist = &cmd[3 + off];
@@ -244,11 +314,9 @@
 		m_next = m->next;
 		sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
 		*slist = rte_mbuf_data_iova(m);
-		/* Set invert df if reference count > 1 */
+		/* Set invert df if buffer is not to be freed by H/W */
 		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-			sg_u |=
-			((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) <<
-			(i + 55));
+			sg_u |= (otx2_nix_prefree_seg(m) << (i + 55));
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!(sg_u & (1ULL << (i + 55)))) {
 			m->next = NULL;
-- 
1.8.3.1