From: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
To: David Hunt <david.hunt@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org, l.wojciechow@partner.samsung.com, stable@dpdk.org
Date: Sat, 10 Oct 2020 18:04:59 +0200
Message-Id: <20201010160508.19709-9-l.wojciechow@partner.samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201010160508.19709-1-l.wojciechow@partner.samsung.com>
a5+wQNgeEr8X/YaG4FVGia6lR9kmMMrPQti1gJFxFaN4amlxbnpqsVFearlecWJucWleul5y fu4mRmDcn/53/MsOxl1/kg4xCnAwKvHwSpxqjBdiTSwrrsw9xCjBwawkwut09nScEG9KYmVV alF+fFFpTmrxIUZpDhYlcV7jRS9jhQTSE0tSs1NTC1KLYLJMHJxSDYzM6x/JSzyfdczk5pO7 z55fmmOd+b7vRPNpZyn/+SXOi8K2afNdmNx7bePqaNHIC6defQ66PjHtivJthqM1Z67kOWbP qW7VPvxjST/vakG7b2xrdbYYh/yX5n6Wv03/vDJ3MJNB5RWZytU/uV8xfOAv/LTrefcCgc2T f1rkhvjbT7I0+3973fV1SizFGYmGWsxFxYkAsH+bIfcCAAA= X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrGLMWRmVeSWpSXmKPExsVy+t/xu7oXbjfGG7z8IG1xY5W9Rd+kj0wW 7z5tZ7J41rOO0eJfxx92B1aPXwuWsnos3vOSyePguz1MAcxRejZF+aUlqQoZ+cUltkrRhhZG eoaWFnpGJpZ6hsbmsVZGpkr6djYpqTmZZalF+nYJehmndj5kLZgbVLH56Vb2BsaXjl2MnBwS AiYSHRPPsYPYQgJLGSUaLul3MXIAxWUkPlwSgCgRlvhzrYuti5ELqOQDo8SFHf9YQRJsArYS R2Z+ZQWpFxEIkzix0h8kzCzgLrFl8VRmEFsYqGThkWcsICUsAqoSe56pgIR5BVwlLmx+zAwx Xl5i9YYDYDangJvE2a6DUKsaGSVO3L7JNoGRbwEjwypGkdTS4tz03GIjveLE3OLSvHS95Pzc TYzAMNx27OeWHYxd74IPMQpwMCrx8EqcaowXYk0sK67MPcQowcGsJMLrdPZ0nBBvSmJlVWpR fnxRaU5q8SFGU6CjJjJLiSbnA2MkryTe0NTQ3MLS0NzY3NjMQkmct0PgYIyQQHpiSWp2ampB ahFMHxMHp1QDY1j5+fPC8xfN0r5XUbvW967tt8Pnr79yOzQ17nL7xi7TCL/vRisKFO9+vu8a EmXIpX0rX4F9c5v6xNo5XppKf7Tn3/i/XKe4IctZbsldlfA6jYOCsU9Lt18L3K7DkKU7U6e3 b28u479FYddPazru4VpRc+Gt+QzeCt76z5I73ReLP09XXinpo8RSnJFoqMVcVJwIAAt/zedZ AgAA X-CMS-MailID: 20201010160536eucas1p2b20e729b90d66eddd03618e98d38c179 X-Msg-Generator: CA Content-Type: text/plain; charset="utf-8" X-RootMTR: 20201010160536eucas1p2b20e729b90d66eddd03618e98d38c179 X-EPHeader: CA CMS-TYPE: 201P X-CMS-RootMailID: 20201010160536eucas1p2b20e729b90d66eddd03618e98d38c179 References: <20201009220202.20834-1-l.wojciechow@partner.samsung.com> <20201010160508.19709-1-l.wojciechow@partner.samsung.com> Subject: [dpdk-stable] [PATCH v7 08/16] test/distributor: fix freeing mbufs X-BeenThere: stable@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: patches for DPDK stable branches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: stable-bounces@dpdk.org Sender: "stable" Sanity tests with mbuf alloc and shutdown tests assume that mbufs passed to worker cores are freed in handlers. Such packets should not be returned to the distributor's main core. The only packets that should be returned are the packets send after completion of the tests in quit_workers function. This patch stops returning mbufs to distributor's core. In case of shutdown tests it is impossible to determine how worker and distributor threads would synchronize. Packets used by tests should be freed and packets used during quit_workers() shouldn't. That's why returning mbufs to mempool is moved to test procedure run on distributor thread from worker threads. Additionally this patch cleans up unused variables. 
Fixes: c0de0eb82e40 ("distributor: switch over to new API")
Cc: david.hunt@intel.com
Cc: stable@dpdk.org

Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_distributor.c | 96 ++++++++++++++++++-------------------
 1 file changed, 47 insertions(+), 49 deletions(-)

diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 838459392..06e01ff9d 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -44,7 +44,7 @@ total_packet_count(void)
 	unsigned i, count = 0;
 	for (i = 0; i < worker_idx; i++)
 		count += __atomic_load_n(&worker_stats[i].handled_packets,
-				__ATOMIC_ACQUIRE);
+				__ATOMIC_RELAXED);
 	return count;
 }
 
@@ -55,7 +55,7 @@ clear_packet_count(void)
 	unsigned int i;
 	for (i = 0; i < RTE_MAX_LCORE; i++)
 		__atomic_store_n(&worker_stats[i].handled_packets, 0,
-			__ATOMIC_RELEASE);
+			__ATOMIC_RELAXED);
 }
 
 /* this is the basic worker function for sanity test
@@ -67,20 +67,18 @@ handle_work(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *db = wp->dist;
-	unsigned int count = 0, num;
+	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(db, id, buf, NULL, 0);
 	while (!quit) {
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-				__ATOMIC_ACQ_REL);
-		count += num;
+				__ATOMIC_RELAXED);
 		num = rte_distributor_get_pkt(db, id,
 				buf, buf, num);
 	}
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-			__ATOMIC_ACQ_REL);
-	count += num;
+			__ATOMIC_RELAXED);
 	rte_distributor_return_pkt(db, id, buf, num);
 	return 0;
 }
@@ -136,7 +134,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-				__ATOMIC_ACQUIRE));
+				__ATOMIC_RELAXED));
 	printf("Sanity test with all zero hashes done.\n");
 
 	/* pick two flows and check they go correctly */
@@ -163,7 +161,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
 			printf("Worker %u handled %u packets\n", i,
 				__atomic_load_n(
 					&worker_stats[i].handled_packets,
-					__ATOMIC_ACQUIRE));
+					__ATOMIC_RELAXED));
 		printf("Sanity test with two hash values done\n");
 	}
 
@@ -190,7 +188,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-				__ATOMIC_ACQUIRE));
+				__ATOMIC_RELAXED));
 	printf("Sanity test with non-zero hashes done\n");
 
 	rte_mempool_put_bulk(p, (void *)bufs, BURST);
@@ -276,23 +274,20 @@ handle_work_with_free_mbufs(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int i;
 	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	while (!quit) {
-		count += num;
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-				__ATOMIC_ACQ_REL);
+				__ATOMIC_RELAXED);
 		for (i = 0; i < num; i++)
 			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	}
-	count += num;
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-			__ATOMIC_ACQ_REL);
+			__ATOMIC_RELAXED);
 	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
@@ -318,7 +313,6 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 		rte_distributor_process(d, NULL, 0);
 		for (j = 0; j < BURST; j++) {
 			bufs[j]->hash.usr = (i+j) << 1;
-			rte_mbuf_refcnt_set(bufs[j], 1);
 		}
 
 		rte_distributor_process(d, bufs, BURST);
@@ -342,15 +336,10 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 static int
 handle_work_for_shutdown_test(void *arg)
 {
-	struct rte_mbuf *pkt = NULL;
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int num;
-	unsigned int total = 0;
-	unsigned int i;
-	unsigned int returned = 0;
 	unsigned int zero_id = 0;
 	unsigned int zero_unset;
 	const unsigned int id = __atomic_fetch_add(&worker_idx, 1,
@@ -368,11 +357,8 @@ handle_work_for_shutdown_test(void *arg)
 	/* wait for quit single globally, or for worker zero, wait
 	 * for zero_quit */
 	while (!quit && !(id == zero_id && zero_quit)) {
-		count += num;
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-				__ATOMIC_ACQ_REL);
-		for (i = 0; i < num; i++)
-			rte_pktmbuf_free(buf[i]);
+				__ATOMIC_RELAXED);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 
 		if (num > 0) {
@@ -381,15 +367,12 @@ handle_work_for_shutdown_test(void *arg)
 				false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
 		}
 		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
-
-		total += num;
 	}
-	count += num;
-	returned = rte_distributor_return_pkt(d, id, buf, num);
-
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-			__ATOMIC_ACQ_REL);
+			__ATOMIC_RELAXED);
 	if (id == zero_id) {
+		rte_distributor_return_pkt(d, id, NULL, 0);
+
 		/* for worker zero, allow it to restart to pick up last packet
 		 * when all workers are shutting down.
 		 */
@@ -400,15 +383,11 @@ handle_work_for_shutdown_test(void *arg)
 
 		while (!quit) {
 			__atomic_fetch_add(&worker_stats[id].handled_packets,
-					num, __ATOMIC_ACQ_REL);
-			count += num;
-			rte_pktmbuf_free(pkt);
+					num, __ATOMIC_RELAXED);
 			num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 		}
-		returned = rte_distributor_return_pkt(d,
-				id, buf, num);
-		printf("Num returned = %d\n", returned);
 	}
+	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
 
@@ -424,7 +403,9 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	struct rte_mbuf *bufs2[BURST];
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Sanity test of worker shutdown ===\n");
 
@@ -450,16 +431,17 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	 */
 
 	/* get more buffers to queue up, again setting them to the same flow */
-	if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
+	if (rte_mempool_get_bulk(p, (void *)bufs2, BURST) != 0) {
 		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		rte_mempool_put_bulk(p, (void *)bufs, BURST);
 		return -1;
 	}
 	for (i = 0; i < BURST; i++)
-		bufs[i]->hash.usr = 1;
+		bufs2[i]->hash.usr = 1;
 
 	/* get worker zero to quit */
 	zero_quit = 1;
-	rte_distributor_process(d, bufs, BURST);
+	rte_distributor_process(d, bufs2, BURST);
 
 	/* flush the distributor */
 	rte_distributor_flush(d);
@@ -468,15 +450,21 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-				__ATOMIC_ACQUIRE));
+				__ATOMIC_RELAXED));
 
 	if (total_packet_count() != BURST * 2) {
 		printf("Line %d: Error, not all packets flushed. "
" "Expected %u, got %u\n", __LINE__, BURST * 2, total_packet_count()); - return -1; + failed = 1; } + rte_mempool_put_bulk(p, (void *)bufs, BURST); + rte_mempool_put_bulk(p, (void *)bufs2, BURST); + + if (failed) + return -1; + printf("Sanity test with worker shutdown passed\n\n"); return 0; } @@ -490,7 +478,8 @@ test_flush_with_worker_shutdown(struct worker_params *wp, { struct rte_distributor *d = wp->dist; struct rte_mbuf *bufs[BURST]; - unsigned i; + unsigned int i; + unsigned int failed = 0; printf("=== Test flush fn with worker shutdown (%s) ===\n", wp->name); @@ -522,15 +511,20 @@ test_flush_with_worker_shutdown(struct worker_params *wp, for (i = 0; i < rte_lcore_count() - 1; i++) printf("Worker %u handled %u packets\n", i, __atomic_load_n(&worker_stats[i].handled_packets, - __ATOMIC_ACQUIRE)); + __ATOMIC_RELAXED)); if (total_packet_count() != BURST) { printf("Line %d: Error, not all packets flushed. " "Expected %u, got %u\n", __LINE__, BURST, total_packet_count()); - return -1; + failed = 1; } + rte_mempool_put_bulk(p, (void *)bufs, BURST); + + if (failed) + return -1; + printf("Flush test with worker shutdown passed\n\n"); return 0; } @@ -596,7 +590,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p) const unsigned num_workers = rte_lcore_count() - 1; unsigned i; struct rte_mbuf *bufs[RTE_MAX_LCORE]; - rte_mempool_get_bulk(p, (void *)bufs, num_workers); + if (rte_mempool_get_bulk(p, (void *)bufs, num_workers) != 0) { + printf("line %d: Error getting mbufs from pool\n", __LINE__); + return; + } zero_quit = 0; quit = 1; @@ -604,11 +601,12 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p) bufs[i]->hash.usr = i << 1; rte_distributor_process(d, bufs, num_workers); - rte_mempool_put_bulk(p, (void *)bufs, num_workers); - rte_distributor_process(d, NULL, 0); rte_distributor_flush(d); rte_eal_mp_wait_lcore(); + + rte_mempool_put_bulk(p, (void *)bufs, num_workers); + quit = 0; worker_idx = 0; zero_idx = RTE_MAX_LCORE; -- 2.17.1