From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by dpdk.space (Postfix) with ESMTP id 522D9A0096
	for <public@inbox.dpdk.org>; Wed,  8 May 2019 12:27:58 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 2A5AE378B;
	Wed,  8 May 2019 12:27:58 +0200 (CEST)
Received: from dispatch1-us1.ppe-hosted.com (dispatch1-us1.ppe-hosted.com
 [148.163.129.52]) by dpdk.org (Postfix) with ESMTP id 74A16374C
 for <dev@dpdk.org>; Wed,  8 May 2019 12:27:56 +0200 (CEST)
X-Virus-Scanned: Proofpoint Essentials engine
Received: from webmail.solarflare.com (webmail.solarflare.com [12.187.104.26])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1-us4.ppe-hosted.com (Proofpoint Essentials ESMTP Server) with ESMTPS id
 F30D9BC005A; Wed,  8 May 2019 10:27:54 +0000 (UTC)
Received: from ocex03.SolarFlarecom.com (10.20.40.36) by
 ocex03.SolarFlarecom.com (10.20.40.36) with Microsoft SMTP Server (TLS) id
 15.0.1395.4; Wed, 8 May 2019 03:27:52 -0700
Received: from opal.uk.solarflarecom.com (10.17.10.1) by
 ocex03.SolarFlarecom.com (10.20.40.36) with Microsoft SMTP Server (TLS) id
 15.0.1395.4 via Frontend Transport; Wed, 8 May 2019 03:27:51 -0700
Received: from ukv-loginhost.uk.solarflarecom.com
 (ukv-loginhost.uk.solarflarecom.com [10.17.10.39])
 by opal.uk.solarflarecom.com (8.13.8/8.13.8) with ESMTP id x48ARor4029818;
 Wed, 8 May 2019 11:27:50 +0100
Received: from ukv-loginhost.uk.solarflarecom.com (localhost [127.0.0.1])
 by ukv-loginhost.uk.solarflarecom.com (Postfix) with ESMTP id 98877161666;
 Wed,  8 May 2019 11:27:50 +0100 (BST)
From: Andrew Rybchenko <arybchenko@solarflare.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>, Bernard Iremonger
 <bernard.iremonger@intel.com>, Jingjing Wu <jingjing.wu@intel.com>, "Wenzhuo
 Lu" <wenzhuo.lu@intel.com>
CC: <dev@dpdk.org>, Pavan Nikhilesh <pbhagavatula@marvell.com>, "Thomas
 Monjalon" <thomas@monjalon.net>
Date: Wed, 8 May 2019 11:27:37 +0100
Message-ID: <1557311257-10719-1-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.3.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-TM-AS-Product-Ver: SMEX-12.5.0.1300-8.5.1010-24598.005
X-TM-AS-Result: No-2.888400-4.000000-10
X-TMASE-MatchedRID: S4Z8qxzv3T1T4Q98GKrcbzVUc/h8Ki+CQl/FdRYkUZLfUZT83lbkEFmh
 lTC2eRES4vM1YF6AJbZhyT3WNjppUtAtbEEX0MxBxEHRux+uk8ifEzJ5hPndGTyH6eV8apBznj3
 5oqjrrc0oquCGhEDENO7JYtUokBO8MDNblw422/XONNlUQL/NPhjGTkrOokadirTtSkwmGcVYTL
 bjUP9XhpN1JFeUKeMEiOOUXfTkScBZSbxIRLLN37zfneGoTKOTVlxr1FJij9s=
X-TM-AS-User-Approved-Sender: No
X-TM-AS-User-Blocked-Sender: No
X-TMASE-Result: 10--2.888400-4.000000
X-TMASE-Version: SMEX-12.5.0.1300-8.5.1010-24598.005
X-MDID: 1557311275-wUWvf3tawhSA
Subject: [dpdk-dev] [PATCH] app/testpmd: fix mbuf leak in the case of
	multi-segment Tx
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

The first mbuf of a multi-segment packet is allocated separately, so
the bulk allocation needs to request only nb_segs - 1 mbufs for the
remaining segments. Since nb_segs mbufs were requested, the last mbuf
obtained from the bulk was never chained into the packet and never
freed.

Fixes: 01b645dcff7f ("app/testpmd: move txonly prepare in separate function")
Fixes: 561ddcf8d099 ("app/testpmd: allocate txonly segments per bulk")

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 app/test-pmd/txonly.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index fa8e0c0..fdfca14 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -167,7 +167,7 @@
 		nb_segs = tx_pkt_nb_segs;
 
 	if (nb_segs > 1) {
-		if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs))
+		if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs - 1))
 			return false;
 	}
 
-- 
1.8.3.1