From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
To: David Hunt, Bruce Richardson
Cc: dev@dpdk.org, l.wojciechow@partner.samsung.com, stable@dpdk.org
Date: Sat, 10 Oct 2020 00:01:55 +0200
Message-Id: <20201009220202.20834-9-l.wojciechow@partner.samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201009220202.20834-1-l.wojciechow@partner.samsung.com>
Content-Type: text/plain; charset="utf-8"
References: <20201008052323.11547-1-l.wojciechow@partner.samsung.com>
 <20201009220202.20834-1-l.wojciechow@partner.samsung.com>
Subject: [dpdk-stable] [PATCH v6 08/15] test/distributor: fix freeing mbufs
List-Id: patches for DPDK stable branches

The sanity tests with mbuf alloc and with worker shutdown assume that
mbufs passed to worker cores are freed in the handlers. Such packets
should not be returned to the distributor's main core. The only
packets that should be returned are those sent after the tests
complete, in the quit_workers() function.

This patch stops returning mbufs to the distributor's core. In the
shutdown tests it is impossible to determine how the worker and
distributor threads will synchronize: packets used by the tests must
be freed, while packets sent during quit_workers() must not be. That
is why returning mbufs to the mempool is moved from the worker threads
into the test procedures running on the distributor thread.

Additionally, this patch cleans up unused variables.

Fixes: c0de0eb82e40 ("distributor: switch over to new API")
Cc: david.hunt@intel.com
Cc: stable@dpdk.org

Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_distributor.c | 96 ++++++++++++++++++-------------------
 1 file changed, 47 insertions(+), 49 deletions(-)
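The resulting split of responsibilities can be sketched as below. This is
a simplified, hypothetical snippet, not code from this patch (worker_fn()
and run_shutdown_test() are illustrative names): a worker only counts the
packets it receives, using relaxed atomics because each worker updates
its own counter and only the summed total is read once the workers are
joined, while the test procedure on the distributor thread keeps
ownership of the mbufs and returns them to the mempool itself.

#include <stdio.h>

#include <rte_distributor.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* shared test state, mirroring test_distributor.c */
static volatile int quit;
static unsigned int worker_idx;
static struct {
	unsigned int handled_packets;
} worker_stats[RTE_MAX_LCORE];

/* worker: count packets and hand them back, never free them */
static int
worker_fn(void *arg)
{
	struct rte_distributor *d = arg;
	struct rte_mbuf *buf[8];
	unsigned int id = __atomic_fetch_add(&worker_idx, 1,
			__ATOMIC_RELAXED);
	unsigned int num = rte_distributor_get_pkt(d, id, buf, NULL, 0);

	while (!quit) {
		/* each worker writes only its own counter and only the
		 * final sum is checked, so relaxed ordering is enough */
		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
				__ATOMIC_RELAXED);
		num = rte_distributor_get_pkt(d, id, buf, buf, num);
	}
	rte_distributor_return_pkt(d, id, buf, num);
	return 0;
}

/* distributor thread: it allocated the mbufs, so it alone frees them,
 * and only after the distributor has been flushed */
static void
run_shutdown_test(struct rte_distributor *d, struct rte_mempool *p,
		struct rte_mbuf **bufs, unsigned int n)
{
	rte_distributor_process(d, bufs, n);
	rte_distributor_flush(d);
	/* ... verify total_packet_count() here ... */
	rte_mempool_put_bulk(p, (void *)bufs, n);
}

Freeing from a single place also removes the race in which a worker
frees an mbuf that the distributor core would touch again while the
workers are being shut down.

As a way to double-check that no mbufs are leaked or double-freed, the
pool can additionally be verified to be full again after the tests.
Another minimal sketch, assuming a dedicated test mempool:
rte_mempool_full() is existing mempool API, and wiring such a check into
the tests is only a suggestion, not part of this patch.

/* returns 0 when every object is back in the pool, -1 on a leak */
static int
check_pool_drained(const struct rte_mempool *p)
{
	/* rte_mempool_full() compares the available count to the pool size */
	if (!rte_mempool_full(p)) {
		printf("mempool not full: some mbufs were leaked\n");
		return -1;
	}
	return 0;
}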
diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 838459392..06e01ff9d 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -44,7 +44,7 @@ total_packet_count(void)
 	unsigned i, count = 0;
 	for (i = 0; i < worker_idx; i++)
 		count += __atomic_load_n(&worker_stats[i].handled_packets,
-				__ATOMIC_ACQUIRE);
+				__ATOMIC_RELAXED);
 	return count;
 }
 
@@ -55,7 +55,7 @@ clear_packet_count(void)
 	unsigned int i;
 	for (i = 0; i < RTE_MAX_LCORE; i++)
 		__atomic_store_n(&worker_stats[i].handled_packets, 0,
-			__ATOMIC_RELEASE);
+			__ATOMIC_RELAXED);
 }
 
 /* this is the basic worker function for sanity test
@@ -67,20 +67,18 @@ handle_work(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *db = wp->dist;
-	unsigned int count = 0, num;
+	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(db, id, buf, NULL, 0);
 	while (!quit) {
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-				__ATOMIC_ACQ_REL);
-		count += num;
+				__ATOMIC_RELAXED);
 		num = rte_distributor_get_pkt(db, id,
 				buf, buf, num);
 	}
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-			__ATOMIC_ACQ_REL);
-	count += num;
+			__ATOMIC_RELAXED);
 	rte_distributor_return_pkt(db, id, buf, num);
 	return 0;
 }
@@ -136,7 +134,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-					__ATOMIC_ACQUIRE));
+					__ATOMIC_RELAXED));
 	printf("Sanity test with all zero hashes done.\n");
 
 	/* pick two flows and check they go correctly */
@@ -163,7 +161,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
 			printf("Worker %u handled %u packets\n", i,
 				__atomic_load_n(
 					&worker_stats[i].handled_packets,
-					__ATOMIC_ACQUIRE));
+					__ATOMIC_RELAXED));
 		printf("Sanity test with two hash values done\n");
 	}
 
@@ -190,7 +188,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-					__ATOMIC_ACQUIRE));
+					__ATOMIC_RELAXED));
 	printf("Sanity test with non-zero hashes done\n");
 
 	rte_mempool_put_bulk(p, (void *)bufs, BURST);
@@ -276,23 +274,20 @@ handle_work_with_free_mbufs(void *arg)
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int i;
 	unsigned int num;
 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
 
 	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	while (!quit) {
-		count += num;
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-				__ATOMIC_ACQ_REL);
+				__ATOMIC_RELAXED);
 		for (i = 0; i < num; i++)
 			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 	}
-	count += num;
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-			__ATOMIC_ACQ_REL);
+			__ATOMIC_RELAXED);
 	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
@@ -318,7 +313,6 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 			rte_distributor_process(d, NULL, 0);
 		for (j = 0; j < BURST; j++) {
 			bufs[j]->hash.usr = (i+j) << 1;
-			rte_mbuf_refcnt_set(bufs[j], 1);
 		}
 
 		rte_distributor_process(d, bufs, BURST);
@@ -342,15 +336,10 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 static int
 handle_work_for_shutdown_test(void *arg)
 {
-	struct rte_mbuf *pkt = NULL;
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int num;
-	unsigned int total = 0;
-	unsigned int i;
-	unsigned int returned = 0;
 	unsigned int zero_id = 0;
 	unsigned int zero_unset;
 	const unsigned int id = __atomic_fetch_add(&worker_idx, 1,
@@ -368,11 +357,8 @@ handle_work_for_shutdown_test(void *arg)
 	/* wait for quit single globally, or for worker zero, wait
 	 * for zero_quit */
 	while (!quit && !(id == zero_id && zero_quit)) {
-		count += num;
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-				__ATOMIC_ACQ_REL);
-		for (i = 0; i < num; i++)
-			rte_pktmbuf_free(buf[i]);
+				__ATOMIC_RELAXED);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 
 		if (num > 0) {
@@ -381,15 +367,12 @@ handle_work_for_shutdown_test(void *arg)
 				false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
 		}
 		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
-
-		total += num;
 	}
-	count += num;
-	returned = rte_distributor_return_pkt(d, id, buf, num);
-
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
-			__ATOMIC_ACQ_REL);
+			__ATOMIC_RELAXED);
 	if (id == zero_id) {
+		rte_distributor_return_pkt(d, id, NULL, 0);
+
 		/* for worker zero, allow it to restart to pick up last packet
 		 * when all workers are shutting down.
 		 */
@@ -400,15 +383,11 @@ handle_work_for_shutdown_test(void *arg)
 
 		while (!quit) {
 			__atomic_fetch_add(&worker_stats[id].handled_packets,
-					num, __ATOMIC_ACQ_REL);
-			count += num;
-			rte_pktmbuf_free(pkt);
+					num, __ATOMIC_RELAXED);
 			num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 		}
-		returned = rte_distributor_return_pkt(d,
-				id, buf, num);
-		printf("Num returned = %d\n", returned);
 	}
+	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
 
@@ -424,7 +403,9 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	struct rte_mbuf *bufs2[BURST];
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Sanity test of worker shutdown ===\n");
 
@@ -450,16 +431,17 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	 */
 
 	/* get more buffers to queue up, again setting them to the same flow */
-	if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
+	if (rte_mempool_get_bulk(p, (void *)bufs2, BURST) != 0) {
 		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		rte_mempool_put_bulk(p, (void *)bufs, BURST);
 		return -1;
 	}
 	for (i = 0; i < BURST; i++)
-		bufs[i]->hash.usr = 1;
+		bufs2[i]->hash.usr = 1;
 
 	/* get worker zero to quit */
 	zero_quit = 1;
-	rte_distributor_process(d, bufs, BURST);
+	rte_distributor_process(d, bufs2, BURST);
 
 	/* flush the distributor */
 	rte_distributor_flush(d);
@@ -468,15 +450,21 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-					__ATOMIC_ACQUIRE));
+					__ATOMIC_RELAXED));
 
 	if (total_packet_count() != BURST * 2) {
 		printf("Line %d: Error, not all packets flushed. "
 				"Expected %u, got %u\n",
 				__LINE__, BURST * 2, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+	rte_mempool_put_bulk(p, (void *)bufs2, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Sanity test with worker shutdown passed\n\n");
 	return 0;
 }
@@ -490,7 +478,8 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 {
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Test flush fn with worker shutdown (%s) ===\n", wp->name);
 
@@ -522,15 +511,20 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 	for (i = 0; i < rte_lcore_count() - 1; i++)
 		printf("Worker %u handled %u packets\n", i,
 			__atomic_load_n(&worker_stats[i].handled_packets,
-					__ATOMIC_ACQUIRE));
+					__ATOMIC_RELAXED));
 
 	if (total_packet_count() != BURST) {
 		printf("Line %d: Error, not all packets flushed. "
 				"Expected %u, got %u\n",
 				__LINE__, BURST, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Flush test with worker shutdown passed\n\n");
 	return 0;
 }
@@ -596,7 +590,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 	const unsigned num_workers = rte_lcore_count() - 1;
 	unsigned i;
 	struct rte_mbuf *bufs[RTE_MAX_LCORE];
-	rte_mempool_get_bulk(p, (void *)bufs, num_workers);
+	if (rte_mempool_get_bulk(p, (void *)bufs, num_workers) != 0) {
+		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		return;
+	}
 
 	zero_quit = 0;
 	quit = 1;
@@ -604,11 +601,12 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 		bufs[i]->hash.usr = i << 1;
 
 	rte_distributor_process(d, bufs, num_workers);
-	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
-
 	rte_distributor_process(d, NULL, 0);
 	rte_distributor_flush(d);
 	rte_eal_mp_wait_lcore();
+
+	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
+
 	quit = 0;
 	worker_idx = 0;
 	zero_idx = RTE_MAX_LCORE;
-- 
2.17.1