From: Harman Kalra
To: , Harman Kalra
CC: , David George
Date: Mon, 20 Sep 2021 20:19:25 +0530
Message-ID: <20210920144925.118704-1-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
Subject: [dpdk-dev] [PATCH] net/octeontx: fix invalid access to indirect buffers

An issue has been observed where fields of an indirect buffer are
accessed after the buffer has been freed by the driver. This patch also
fixes freeing of direct buffers to the correct aura.
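For illustration only (not part of the patch), the hazard and the fix can
be sketched with a small standalone C model; "prefree" stands in for
octeontx_prefree_seg() and the struct below is a hypothetical stand-in
for struct rte_mbuf, not the real DPDK types:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct mbuf {
            uint16_t data_len;
            uint64_t iova;
            int freed;              /* models "returned to the pool/aura" */
    };

    /* Models octeontx_prefree_seg(): may release the buffer and reports,
     * via *m_tofree, which mbuf actually goes back to the pool (for an
     * indirect mbuf that would be the direct one). */
    static int prefree(struct mbuf *m, struct mbuf **m_tofree)
    {
            *m_tofree = m;
            m->freed = 1;           /* from here on, m's fields are off limits */
            return 1;
    }

    int main(void)
    {
            struct mbuf pkt = { .data_len = 64, .iova = 0x1000, .freed = 0 };
            struct mbuf *m_tofree = &pkt;

            /* Snapshot TX parameters before prefree(), as the patch does. */
            uint16_t data_len = pkt.data_len;
            uint64_t iova = pkt.iova;

            prefree(&pkt, &m_tofree);

            /* Build the descriptor from the snapshot and use m_tofree for
             * any pool/aura lookup, never the possibly freed packet mbuf. */
            printf("desc: len=%u iova=0x%" PRIx64 "\n", (unsigned)data_len, iova);
            return 0;
    }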
Fixes: 5cbe184802aa ("net/octeontx: support fast mbuf free")
Cc: stable@dpdk.org

Signed-off-by: David George
Signed-off-by: Harman Kalra
---
 drivers/net/octeontx/octeontx_rxtx.h | 69 ++++++++++++++++++----------
 1 file changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index 2ed28ea563..e0723ac26a 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -161,7 +161,7 @@ ptype_table[PTYPE_SIZE][PTYPE_SIZE][PTYPE_SIZE] = {
 
 
 static __rte_always_inline uint64_t
-octeontx_pktmbuf_detach(struct rte_mbuf *m)
+octeontx_pktmbuf_detach(struct rte_mbuf *m, struct rte_mbuf **m_tofree)
 {
 	struct rte_mempool *mp = m->pool;
 	uint32_t mbuf_size, buf_len;
@@ -171,6 +171,8 @@ octeontx_pktmbuf_detach(struct rte_mbuf *m)
 
 	/* Update refcount of direct mbuf */
 	md = rte_mbuf_from_indirect(m);
+	/* The real data will be in the direct buffer, inform callers this */
+	*m_tofree = md;
 	refcount = rte_mbuf_refcnt_update(md, -1);
 
 	priv_size = rte_pktmbuf_priv_size(mp);
@@ -203,18 +205,18 @@ octeontx_pktmbuf_detach(struct rte_mbuf *m)
 }
 
 static __rte_always_inline uint64_t
-octeontx_prefree_seg(struct rte_mbuf *m)
+octeontx_prefree_seg(struct rte_mbuf *m, struct rte_mbuf **m_tofree)
 {
 	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
 		if (!RTE_MBUF_DIRECT(m))
-			return octeontx_pktmbuf_detach(m);
+			return octeontx_pktmbuf_detach(m, m_tofree);
 
 		m->next = NULL;
 		m->nb_segs = 1;
 		return 0;
 	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
 		if (!RTE_MBUF_DIRECT(m))
-			return octeontx_pktmbuf_detach(m);
+			return octeontx_pktmbuf_detach(m, m_tofree);
 
 		rte_mbuf_refcnt_set(m, 1);
 		m->next = NULL;
@@ -315,6 +317,14 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 		      const uint16_t flag)
 {
 	uint16_t gaura_id, nb_desc = 0;
+	struct rte_mbuf *m_tofree;
+	rte_iova_t iova;
+	uint16_t data_len;
+
+	m_tofree = tx_pkt;
+
+	data_len = tx_pkt->data_len;
+	iova = rte_mbuf_data_iova(tx_pkt);
 
 	/* Setup PKO_SEND_HDR_S */
 	cmd_buf[nb_desc++] = tx_pkt->data_len & 0xffff;
@@ -329,22 +339,23 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 	 * not, as SG_DESC[I] and SEND_HDR[II] are clear.
 	 */
 	if (flag & OCCTX_TX_OFFLOAD_MBUF_NOFF_F)
-		cmd_buf[0] |= (octeontx_prefree_seg(tx_pkt) <<
+		cmd_buf[0] |= (octeontx_prefree_seg(tx_pkt, &m_tofree) <<
			       58);
 
 	/* Mark mempool object as "put" since it is freed by PKO */
 	if (!(cmd_buf[0] & (1ULL << 58)))
-		__mempool_check_cookies(tx_pkt->pool, (void **)&tx_pkt,
+		__mempool_check_cookies(m_tofree->pool, (void **)&m_tofree,
					1, 0);
 	/* Get the gaura Id */
-	gaura_id = octeontx_fpa_bufpool_gaura((uintptr_t)tx_pkt->pool->pool_id);
+	gaura_id =
+		octeontx_fpa_bufpool_gaura((uintptr_t)m_tofree->pool->pool_id);
 
 	/* Setup PKO_SEND_BUFLINK_S */
 	cmd_buf[nb_desc++] = PKO_SEND_BUFLINK_SUBDC |
			     PKO_SEND_BUFLINK_LDTYPE(0x1ull) |
			     PKO_SEND_BUFLINK_GAUAR((long)gaura_id) |
-			     tx_pkt->data_len;
-	cmd_buf[nb_desc++] = rte_mbuf_data_iova(tx_pkt);
+			     data_len;
+	cmd_buf[nb_desc++] = iova;
 
 	return nb_desc;
 }
@@ -355,7 +366,9 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 {
 	uint16_t nb_segs, nb_desc = 0;
 	uint16_t gaura_id, len = 0;
-	struct rte_mbuf *m_next = NULL;
+	struct rte_mbuf *m_next = NULL, *m_tofree;
+	rte_iova_t iova;
+	uint16_t data_len;
 
 	nb_segs = tx_pkt->nb_segs;
 	/* Setup PKO_SEND_HDR_S */
@@ -369,40 +382,50 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
 	do {
 		m_next = tx_pkt->next;
 
-		/* To handle case where mbufs belong to diff pools, like
-		 * fragmentation
+		/* Get TX parameters up front, octeontx_prefree_seg might change
+		 * them
 		 */
-		gaura_id = octeontx_fpa_bufpool_gaura((uintptr_t)
-				tx_pkt->pool->pool_id);
+		m_tofree = tx_pkt;
+		data_len = tx_pkt->data_len;
+		iova = rte_mbuf_data_iova(tx_pkt);
 
 		/* Setup PKO_SEND_GATHER_S */
-		cmd_buf[nb_desc] = PKO_SEND_GATHER_SUBDC |
-				   PKO_SEND_GATHER_LDTYPE(0x1ull) |
-				   PKO_SEND_GATHER_GAUAR((long)gaura_id) |
-				   tx_pkt->data_len;
+		cmd_buf[nb_desc] = 0;
 
 		/* SG_DESC[I] bit controls if buffer is to be freed or
 		 * not, as SEND_HDR[DF] and SEND_HDR[II] are clear.
 		 */
 		if (flag & OCCTX_TX_OFFLOAD_MBUF_NOFF_F) {
 			cmd_buf[nb_desc] |=
-				(octeontx_prefree_seg(tx_pkt) << 57);
+				(octeontx_prefree_seg(tx_pkt, &m_tofree) << 57);
 		}
 
+		/* To handle case where mbufs belong to diff pools, like
+		 * fragmentation
+		 */
+		gaura_id = octeontx_fpa_bufpool_gaura((uintptr_t)
+				m_tofree->pool->pool_id);
+
+		/* Setup PKO_SEND_GATHER_S */
+		cmd_buf[nb_desc] |= PKO_SEND_GATHER_SUBDC |
+				   PKO_SEND_GATHER_LDTYPE(0x1ull) |
+				   PKO_SEND_GATHER_GAUAR((long)gaura_id) |
+				   data_len;
+
 		/* Mark mempool object as "put" since it is freed by
 		 * PKO.
 		 */
 		if (!(cmd_buf[nb_desc] & (1ULL << 57))) {
 			tx_pkt->next = NULL;
-			__mempool_check_cookies(tx_pkt->pool,
-						(void **)&tx_pkt, 1, 0);
+			__mempool_check_cookies(m_tofree->pool,
						(void **)&m_tofree, 1, 0);
 		}
 		nb_desc++;
 
-		cmd_buf[nb_desc++] = rte_mbuf_data_iova(tx_pkt);
+		cmd_buf[nb_desc++] = iova;
 
 		nb_segs--;
-		len += tx_pkt->data_len;
+		len += data_len;
 		tx_pkt = m_next;
 	} while (nb_segs);
 
-- 
2.18.0