From: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
To: David Hunt <david.hunt@intel.com>, Bruce Richardson
 <bruce.richardson@intel.com>
Cc: dev@dpdk.org, l.wojciechow@partner.samsung.com, stable@dpdk.org
Date: Sat, 17 Oct 2020 05:06:51 +0200
Message-Id: <20201017030701.16134-8-l.wojciechow@partner.samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201017030701.16134-1-l.wojciechow@partner.samsung.com>
Content-Type: text/plain; charset="utf-8"
References: <20201010160508.19709-1-l.wojciechow@partner.samsung.com>
 <20201017030701.16134-1-l.wojciechow@partner.samsung.com>
 <CGME20201017030715eucas1p2366d1f0ce16a219b21542bb26e4588a6@eucas1p2.samsung.com>
Subject: [dpdk-dev] [PATCH v8 07/17] test/distributor: fix freeing mbufs

The sanity test with mbuf alloc and the shutdown tests assume that
mbufs passed to worker cores are freed in the handlers. Such packets
should not be returned to the distributor's main core. The only
packets that should be returned are the ones sent after the tests
complete, in the quit_workers function.

This patch stops returning mbufs to the distributor's core. In the
case of the shutdown tests, it is impossible to determine how the
worker and distributor threads will synchronize. Packets used by the
tests should be freed, while packets used during quit_workers()
should not. That is why returning mbufs to the mempool is moved from
the worker threads to the test procedure running on the distributor
thread.

Additionally, this patch cleans up unused variables.

Fixes: c0de0eb82e40 ("distributor: switch over to new API")
Cc: david.hunt@intel.com
Cc: stable@dpdk.org

Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_distributor.c | 67 ++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 34 deletions(-)

diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 6cd7a2edd..ec1fe348b 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -63,20 +63,18 @@ handle_work(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *db = wp->dist;
-	unsigned int count = 0, num;
+	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(db, id, buf, NULL, 0);
 	while (!quit) {
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 				__ATOMIC_RELAXED);
-		count += num;
 		num = rte_distributor_get_pkt(db, id,
 				buf, buf, num);
 	}
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 			__ATOMIC_RELAXED);
-	count += num;
 	rte_distributor_return_pkt(db, id, buf, num);
 	return 0;
 }
@@ -268,7 +266,6 @@ handle_work_with_free_mbufs(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int i;
 	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
@@ -276,13 +273,11 @@ handle_work_with_free_mbufs(void *arg)
 	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	while (!quit) {
 		worker_stats[id].handled_packets += num;
-		count += num;
 		for (i = 0; i < num; i++)
 			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	}
 	worker_stats[id].handled_packets += num;
-	count += num;
 	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
@@ -308,7 +303,6 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 			rte_distributor_process(d, NULL, 0);
 		for (j = 0; j < BURST; j++) {
 			bufs[j]->hash.usr = (i+j) << 1;
-			rte_mbuf_refcnt_set(bufs[j], 1);
 		}
 
 		rte_distributor_process(d, bufs, BURST);
@@ -332,15 +326,10 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 static int
 handle_work_for_shutdown_test(void *arg)
 {
-	struct rte_mbuf *pkt = NULL;
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int num;
-	unsigned int total = 0;
-	unsigned int i;
-	unsigned int returned = 0;
 	unsigned int zero_id = 0;
 	unsigned int zero_unset;
 	const unsigned int id = __atomic_fetch_add(&worker_idx, 1,
@@ -359,9 +348,6 @@ handle_work_for_shutdown_test(void *arg)
 	 * for zero_quit */
 	while (!quit && !(id == zero_id && zero_quit)) {
 		worker_stats[id].handled_packets += num;
-		count += num;
-		for (i = 0; i < num; i++)
-			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 
 		if (num > 0) {
@@ -370,14 +356,12 @@ handle_work_for_shutdown_test(void *arg)
 				false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
 		}
 		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
-
-		total += num;
 	}
 	worker_stats[id].handled_packets += num;
-	count += num;
-	returned = rte_distributor_return_pkt(d, id, buf, num);
 
 	if (id == zero_id) {
+		rte_distributor_return_pkt(d, id, NULL, 0);
+
 		/* for worker zero, allow it to restart to pick up last packet
 		 * when all workers are shutting down.
 		 */
@@ -388,14 +372,10 @@ handle_work_for_shutdown_test(void *arg)
 
 		while (!quit) {
 			worker_stats[id].handled_packets += num;
-			count += num;
-			rte_pktmbuf_free(pkt);
 			num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 		}
-		returned = rte_distributor_return_pkt(d,
-				id, buf, num);
-		printf("Num returned = %d\n", returned);
 	}
+	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
 
@@ -411,7 +391,9 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	struct rte_mbuf *bufs2[BURST];
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Sanity test of worker shutdown ===\n");
 
@@ -437,16 +419,17 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	 */
 
 	/* get more buffers to queue up, again setting them to the same flow */
-	if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
+	if (rte_mempool_get_bulk(p, (void *)bufs2, BURST) != 0) {
 		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		rte_mempool_put_bulk(p, (void *)bufs, BURST);
 		return -1;
 	}
 	for (i = 0; i < BURST; i++)
-		bufs[i]->hash.usr = 1;
+		bufs2[i]->hash.usr = 1;
 
 	/* get worker zero to quit */
 	zero_quit = 1;
-	rte_distributor_process(d, bufs, BURST);
+	rte_distributor_process(d, bufs2, BURST);
 
 	/* flush the distributor */
 	rte_distributor_flush(d);
@@ -460,9 +443,15 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 		printf("Line %d: Error, not all packets flushed. "
 				"Expected %u, got %u\n",
 				__LINE__, BURST * 2, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+	rte_mempool_put_bulk(p, (void *)bufs2, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Sanity test with worker shutdown passed\n\n");
 	return 0;
 }
@@ -476,7 +465,8 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Test flush fn with worker shutdown (%s) ===\n", wp->name);
 
@@ -513,9 +503,14 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 		printf("Line %d: Error, not all packets flushed. "
 				"Expected %u, got %u\n",
 				__LINE__, BURST, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Flush test with worker shutdown passed\n\n");
 	return 0;
 }
@@ -581,7 +576,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 	const unsigned num_workers = rte_lcore_count() - 1;
 	unsigned i;
 	struct rte_mbuf *bufs[RTE_MAX_LCORE];
-	rte_mempool_get_bulk(p, (void *)bufs, num_workers);
+	if (rte_mempool_get_bulk(p, (void *)bufs, num_workers) != 0) {
+		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		return;
+	}
 
 	zero_quit = 0;
 	quit = 1;
@@ -589,11 +587,12 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 		bufs[i]->hash.usr = i << 1;
 	rte_distributor_process(d, bufs, num_workers);
 
-	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
-
 	rte_distributor_process(d, NULL, 0);
 	rte_distributor_flush(d);
 	rte_eal_mp_wait_lcore();
+
+	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
+
 	quit = 0;
 	worker_idx = 0;
 	zero_idx = RTE_MAX_LCORE;
-- 
2.17.1