From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id AA295A04BC;
	Thu,  8 Oct 2020 07:26:36 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id AD6AB1BA83;
	Thu,  8 Oct 2020 07:24:00 +0200 (CEST)
Received: from mailout2.w1.samsung.com (mailout2.w1.samsung.com
 [210.118.77.12]) by dpdk.org (Postfix) with ESMTP id AA14E1B6F6
 for <dev@dpdk.org>; Thu,  8 Oct 2020 07:23:43 +0200 (CEST)
Received: from eucas1p1.samsung.com (unknown [182.198.249.206])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20201008052343euoutp0287bdd1a64df853203a8d83d97af296ae~77POct8Te0526205262euoutp02l
 for <dev@dpdk.org>; Thu,  8 Oct 2020 05:23:43 +0000 (GMT)
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20201008052343eucas1p226336ef8ff569c64ade58f052f2aa967~77POQtZpE2346323463eucas1p27;
 Thu,  8 Oct 2020 05:23:43 +0000 (GMT)
Received: from eucas1p2.samsung.com ( [182.198.249.207]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id 10.AD.06318.F52AE7F5; Thu,  8
 Oct 2020 06:23:43 +0100 (BST)
Received: from eusmtrp1.samsung.com (unknown [182.198.249.138]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTPA id
 20201008052342eucas1p19e8474360d1f7dacd4164b3e21e54290~77PNyUfuM1295112951eucas1p1G;
 Thu,  8 Oct 2020 05:23:42 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp1.samsung.com (KnoxPortal) with ESMTP id
 20201008052342eusmtrp172037bbcb360a9ff57c1886b3597950f~77PNxxU042941529415eusmtrp1X;
 Thu,  8 Oct 2020 05:23:42 +0000 (GMT)
X-AuditID: cbfec7f5-38bff700000018ae-af-5f7ea25f5349
Received: from eusmtip1.samsung.com ( [203.254.199.221]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id C8.89.06314.E52AE7F5; Thu,  8
 Oct 2020 06:23:42 +0100 (BST)
Received: from Padamandas.fritz.box (unknown [106.210.88.70]) by
 eusmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20201008052342eusmtip148d5bf5fe1ea72805defc53c6d4689eb~77PNMl1d92784127841eusmtip18;
 Thu,  8 Oct 2020 05:23:42 +0000 (GMT)
From: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
To: David Hunt <david.hunt@intel.com>, Bruce Richardson
 <bruce.richardson@intel.com>
Cc: dev@dpdk.org, l.wojciechow@partner.samsung.com, stable@dpdk.org
Date: Thu,  8 Oct 2020 07:23:16 +0200
Message-Id: <20201008052323.11547-9-l.wojciechow@partner.samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201008052323.11547-1-l.wojciechow@partner.samsung.com>
X-CMS-MailID: 20201008052342eucas1p19e8474360d1f7dacd4164b3e21e54290
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20201008052342eucas1p19e8474360d1f7dacd4164b3e21e54290
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20201008052342eucas1p19e8474360d1f7dacd4164b3e21e54290
References: <20200925224209.12173-1-l.wojciechow@partner.samsung.com>
 <20201008052323.11547-1-l.wojciechow@partner.samsung.com>
 <CGME20201008052342eucas1p19e8474360d1f7dacd4164b3e21e54290@eucas1p1.samsung.com>
Subject: [dpdk-dev] [PATCH v5 08/15] test/distributor: fix freeing mbufs
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

The sanity tests with mbuf alloc and the shutdown tests assume
that mbufs passed to worker cores are freed in the handlers.
Such packets should not be returned to the distributor's main
core. The only packets that should be returned are the packets
sent after completion of the tests in the quit_workers function.

This patch stops returning mbufs to the distributor's core.
In case of the shutdown tests it is impossible to determine
how the worker and distributor threads will synchronize.
Packets used by the tests should be freed, while packets used
during quit_workers() should not be. That's why returning mbufs
to the mempool is moved from the worker threads to the test
procedure running on the distributor thread.

Additionally, this patch cleans up unused variables.

Fixes: c0de0eb82e40 ("distributor: switch over to new API")
Cc: david.hunt@intel.com
Cc: stable@dpdk.org

Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_distributor.c | 68 ++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 35 deletions(-)

diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 838459392..d7f780acc 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -67,20 +67,18 @@ handle_work(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *db = wp->dist;
-	unsigned int count = 0, num;
+	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(db, id, buf, NULL, 0);
 	while (!quit) {
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 				__ATOMIC_ACQ_REL);
-		count += num;
 		num = rte_distributor_get_pkt(db, id,
 				buf, buf, num);
 	}
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 			__ATOMIC_ACQ_REL);
-	count += num;
 	rte_distributor_return_pkt(db, id, buf, num);
 	return 0;
 }
@@ -276,21 +274,18 @@ handle_work_with_free_mbufs(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int i;
 	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	while (!quit) {
-		count += num;
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 				__ATOMIC_ACQ_REL);
 		for (i = 0; i < num; i++)
 			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	}
-	count += num;
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 			__ATOMIC_ACQ_REL);
 	rte_distributor_return_pkt(d, id, buf, num);
@@ -318,7 +313,6 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 			rte_distributor_process(d, NULL, 0);
 		for (j = 0; j < BURST; j++) {
 			bufs[j]->hash.usr = (i+j) << 1;
-			rte_mbuf_refcnt_set(bufs[j], 1);
 		}
 
 		rte_distributor_process(d, bufs, BURST);
@@ -342,15 +336,10 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 static int
 handle_work_for_shutdown_test(void *arg)
 {
-	struct rte_mbuf *pkt = NULL;
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int num;
-	unsigned int total = 0;
-	unsigned int i;
-	unsigned int returned = 0;
 	unsigned int zero_id = 0;
 	unsigned int zero_unset;
 	const unsigned int id = __atomic_fetch_add(&worker_idx, 1,
@@ -368,11 +357,8 @@ handle_work_for_shutdown_test(void *arg)
 	/* wait for quit single globally, or for worker zero, wait
 	 * for zero_quit */
 	while (!quit && !(id == zero_id && zero_quit)) {
-		count += num;
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 				__ATOMIC_ACQ_REL);
-		for (i = 0; i < num; i++)
-			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 
 		if (num > 0) {
@@ -381,15 +367,12 @@ handle_work_for_shutdown_test(void *arg)
 				false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
 		}
 		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
-
-		total += num;
 	}
-	count += num;
-	returned = rte_distributor_return_pkt(d, id, buf, num);
-
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 			__ATOMIC_ACQ_REL);
 	if (id == zero_id) {
+		rte_distributor_return_pkt(d, id, NULL, 0);
+
 		/* for worker zero, allow it to restart to pick up last packet
 		 * when all workers are shutting down.
 		 */
@@ -401,14 +384,10 @@ handle_work_for_shutdown_test(void *arg)
 		while (!quit) {
 			__atomic_fetch_add(&worker_stats[id].handled_packets,
 					num, __ATOMIC_ACQ_REL);
-			count += num;
-			rte_pktmbuf_free(pkt);
 			num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 		}
-		returned = rte_distributor_return_pkt(d,
-				id, buf, num);
-		printf("Num returned = %d\n", returned);
 	}
+	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
 
@@ -424,7 +403,9 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	struct rte_mbuf *bufs2[BURST];
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Sanity test of worker shutdown ===\n");
 
@@ -450,16 +431,17 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	 */
 
 	/* get more buffers to queue up, again setting them to the same flow */
-	if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
+	if (rte_mempool_get_bulk(p, (void *)bufs2, BURST) != 0) {
 		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		rte_mempool_put_bulk(p, (void *)bufs, BURST);
 		return -1;
 	}
 	for (i = 0; i < BURST; i++)
-		bufs[i]->hash.usr = 1;
+		bufs2[i]->hash.usr = 1;
 
 	/* get worker zero to quit */
 	zero_quit = 1;
-	rte_distributor_process(d, bufs, BURST);
+	rte_distributor_process(d, bufs2, BURST);
 
 	/* flush the distributor */
 	rte_distributor_flush(d);
@@ -474,9 +456,15 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 		printf("Line %d: Error, not all packets flushed. "
 				"Expected %u, got %u\n",
 				__LINE__, BURST * 2, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+	rte_mempool_put_bulk(p, (void *)bufs2, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Sanity test with worker shutdown passed\n\n");
 	return 0;
 }
@@ -490,7 +478,8 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Test flush fn with worker shutdown (%s) ===\n", wp->name);
 
@@ -528,9 +517,14 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 		printf("Line %d: Error, not all packets flushed. "
 				"Expected %u, got %u\n",
 				__LINE__, BURST, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Flush test with worker shutdown passed\n\n");
 	return 0;
 }
@@ -596,7 +590,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 	const unsigned num_workers = rte_lcore_count() - 1;
 	unsigned i;
 	struct rte_mbuf *bufs[RTE_MAX_LCORE];
-	rte_mempool_get_bulk(p, (void *)bufs, num_workers);
+	if (rte_mempool_get_bulk(p, (void *)bufs, num_workers) != 0) {
+		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		return;
+	}
 
 	zero_quit = 0;
 	quit = 1;
@@ -604,11 +601,12 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 		bufs[i]->hash.usr = i << 1;
 	rte_distributor_process(d, bufs, num_workers);
 
-	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
-
 	rte_distributor_process(d, NULL, 0);
 	rte_distributor_flush(d);
 	rte_eal_mp_wait_lcore();
+
+	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
+
 	quit = 0;
 	worker_idx = 0;
 	zero_idx = RTE_MAX_LCORE;
-- 
2.17.1