From: Lukasz Wojciechowski
To: Honnappa Nagarahalli, David Hunt, Bruce Richardson
Cc: dev@dpdk.org, stable@dpdk.org, nd, 'Lukasz Wojciechowski'
Date: Sat, 17 Oct 2020 05:28:45 +0200
Message-ID: <09ec498f-6dea-6100-ca39-678a3e4c6fde@partner.samsung.com>
References: <20201009220202.20834-1-l.wojciechow@partner.samsung.com>
 <20201010160508.19709-1-l.wojciechow@partner.samsung.com>
 <20201010160508.19709-9-l.wojciechow@partner.samsung.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v7 08/16] test/distributor: fix freeing mbufs

Hi Honnappa,

On 16.10.2020 at 07:12, Honnappa Nagarahalli wrote:
>
>
>> Sanity tests with mbuf alloc and shutdown tests assume that mbufs passed
>> to worker cores are freed in handlers.
>> Such packets should not be returned to the distributor's main core. The only
>> packets that should be returned are the packets sent after completion of
>> the tests in the quit_workers function.
>>
>> This patch stops returning mbufs to the distributor's core.
>> In the case of shutdown tests it is impossible to determine how the worker
>> and distributor threads would synchronize.
>> Packets used by the tests should be freed, and packets used during
>> quit_workers() shouldn't be. That's why returning mbufs to the mempool is
>> moved from the worker threads to the test procedure run on the distributor
>> thread.
>>
>> Additionally, this patch cleans up unused variables.
>>
>> Fixes: c0de0eb82e40 ("distributor: switch over to new API")
>> Cc: david.hunt@intel.com
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Lukasz Wojciechowski
>> Acked-by: David Hunt
>> ---
>>  app/test/test_distributor.c | 96 ++++++++++++++++++-------------------
>>  1 file changed, 47 insertions(+), 49 deletions(-)
>>
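For readers following the thread, the worker loop that the sanity and
mbuf-alloc tests rely on ends up looking roughly like this after the patch
(reassembled from the hunks below; mbufs are freed inside the handler and
only the final burst is handed back at shutdown):

static int
handle_work_with_free_mbufs(void *arg)
{
    struct rte_mbuf *buf[8] __rte_cache_aligned;
    struct worker_params *wp = arg;
    struct rte_distributor *d = wp->dist;
    unsigned int i;
    unsigned int num;
    unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);

    num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
    while (!quit) {
        /* count the burst, then free it locally instead of returning it */
        __atomic_fetch_add(&worker_stats[id].handled_packets, num,
                __ATOMIC_RELAXED);
        for (i = 0; i < num; i++)
            rte_pktmbuf_free(buf[i]);
        num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
    }
    /* only the packets received while quitting go back to the distributor */
    __atomic_fetch_add(&worker_stats[id].handled_packets, num,
            __ATOMIC_RELAXED);
    rte_distributor_return_pkt(d, id, buf, num);
    return 0;
}

On the distributor side, the test procedures correspondingly take mbufs with
rte_mempool_get_bulk() up front and give them back with rte_mempool_put_bulk()
only after rte_distributor_flush()/rte_eal_mp_wait_lcore(), as the hunks for
quit_workers() further down show.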
>> diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
>> index 838459392..06e01ff9d 100644
>> --- a/app/test/test_distributor.c
>> +++ b/app/test/test_distributor.c
>> @@ -44,7 +44,7 @@ total_packet_count(void)
>>      unsigned i, count = 0;
>>      for (i = 0; i < worker_idx; i++)
>>          count += __atomic_load_n(&worker_stats[i].handled_packets,
>> -                __ATOMIC_ACQUIRE);
>> +                __ATOMIC_RELAXED);
> I think it is better to make this and other statistics changes below in
> commit 6/16. It will be in line with the commit log as well.
I changed the order of patches to avoid duplicated changes in the code.
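(For illustration only, not code from the patch: relaxed ordering is enough
for these counters because each worker only ever increments its own slot, the
main core sums the slots after the workers have been synchronized by other
means, rte_distributor_flush()/rte_eal_mp_wait_lcore() in the tests, and no
other data is published through the counters, so acquire/release ordering adds
cost without adding guarantees. A stand-alone sketch of the same pattern:

#include <stdatomic.h>

#define MAX_WORKERS 16

/* hypothetical stand-alone counters, mirroring worker_stats[] in the test */
static atomic_uint handled[MAX_WORKERS];

/* called only by the worker that owns slot 'id' */
static void
count_burst(unsigned int id, unsigned int num)
{
    atomic_fetch_add_explicit(&handled[id], num, memory_order_relaxed);
}

/* called on the main core after the workers have been flushed/joined */
static unsigned int
total_handled(unsigned int n_workers)
{
    unsigned int i, total = 0;

    for (i = 0; i < n_workers; i++)
        total += atomic_load_explicit(&handled[i], memory_order_relaxed);
    return total;
}
)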
>
>>      return count;
>>  }
>>
>> @@ -55,7 +55,7 @@ clear_packet_count(void)
>>      unsigned int i;
>>      for (i = 0; i < RTE_MAX_LCORE; i++)
>>          __atomic_store_n(&worker_stats[i].handled_packets, 0,
>> -                __ATOMIC_RELEASE);
>> +                __ATOMIC_RELAXED);
>>  }
>>
>>  /* this is the basic worker function for sanity test
>> @@ -67,20 +67,18 @@ handle_work(void *arg)
>>      struct rte_mbuf *buf[8] __rte_cache_aligned;
>>      struct worker_params *wp = arg;
>>      struct rte_distributor *db = wp->dist;
>> -    unsigned int count = 0, num;
>> +    unsigned int num;
>>      unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
>>
>>      num = rte_distributor_get_pkt(db, id, buf, NULL, 0);
>>      while (!quit) {
>>          __atomic_fetch_add(&worker_stats[id].handled_packets, num,
>> -                __ATOMIC_ACQ_REL);
>> -        count += num;
>> +                __ATOMIC_RELAXED);
>>          num = rte_distributor_get_pkt(db, id,
>>                  buf, buf, num);
>>      }
>>      __atomic_fetch_add(&worker_stats[id].handled_packets, num,
>> -            __ATOMIC_ACQ_REL);
>> -    count += num;
>> +            __ATOMIC_RELAXED);
>>      rte_distributor_return_pkt(db, id, buf, num);
>>      return 0;
>>  }
>> @@ -136,7 +134,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
>>      for (i = 0; i < rte_lcore_count() - 1; i++)
>>          printf("Worker %u handled %u packets\n", i,
>>              __atomic_load_n(&worker_stats[i].handled_packets,
>> -                __ATOMIC_ACQUIRE));
>> +                __ATOMIC_RELAXED));
>>      printf("Sanity test with all zero hashes done.\n");
>>
>>      /* pick two flows and check they go correctly */
>> @@ -163,7 +161,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
>>              printf("Worker %u handled %u packets\n", i,
>>                  __atomic_load_n(
>>                      &worker_stats[i].handled_packets,
>> -                    __ATOMIC_ACQUIRE));
>> +                    __ATOMIC_RELAXED));
>>          printf("Sanity test with two hash values done\n");
>>      }
>>
>> @@ -190,7 +188,7 @@ sanity_test(struct worker_params *wp, struct rte_mempool *p)
>>      for (i = 0; i < rte_lcore_count() - 1; i++)
>>          printf("Worker %u handled %u packets\n", i,
>>              __atomic_load_n(&worker_stats[i].handled_packets,
>> -                __ATOMIC_ACQUIRE));
>> +                __ATOMIC_RELAXED));
>>      printf("Sanity test with non-zero hashes done\n");
>>
>>      rte_mempool_put_bulk(p, (void *)bufs, BURST);
>> @@ -276,23 +274,20 @@ handle_work_with_free_mbufs(void *arg)
>>      struct rte_mbuf *buf[8] __rte_cache_aligned;
>>      struct worker_params *wp = arg;
>>      struct rte_distributor *d = wp->dist;
>> -    unsigned int count = 0;
>>      unsigned int i;
>>      unsigned int num;
>>      unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
>>
>>      num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
>>      while (!quit) {
>> -        count += num;
>>          __atomic_fetch_add(&worker_stats[id].handled_packets, num,
>> -                __ATOMIC_ACQ_REL);
>> +                __ATOMIC_RELAXED);
>>          for (i = 0; i < num; i++)
>>              rte_pktmbuf_free(buf[i]);
>>          num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
>>      }
>> -    count += num;
>>      __atomic_fetch_add(&worker_stats[id].handled_packets, num,
>> -            __ATOMIC_ACQ_REL);
>> +            __ATOMIC_RELAXED);
>>      rte_distributor_return_pkt(d, id, buf, num);
>>      return 0;
>>  }
>> @@ -318,7 +313,6 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
>>          rte_distributor_process(d, NULL, 0);
>>          for (j = 0; j < BURST; j++) {
>>              bufs[j]->hash.usr = (i+j) << 1;
>> -            rte_mbuf_refcnt_set(bufs[j], 1);
>>          }
>>
>>          rte_distributor_process(d, bufs, BURST);
>> @@ -342,15 +336,10 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
>>  static int
>>  handle_work_for_shutdown_test(void *arg)
>>  {
>> -    struct rte_mbuf *pkt = NULL;
>>      struct rte_mbuf *buf[8] __rte_cache_aligned;
>>      struct worker_params *wp = arg;
>>      struct rte_distributor *d = wp->dist;
>> -    unsigned int count = 0;
>>      unsigned int num;
>> -    unsigned int total = 0;
>> -    unsigned int i;
>> -    unsigned int returned = 0;
>>      unsigned int zero_id = 0;
>>      unsigned int zero_unset;
>>      const unsigned int id = __atomic_fetch_add(&worker_idx, 1,
>> @@ -368,11 +357,8 @@ handle_work_for_shutdown_test(void *arg)
>>      /* wait for quit single globally, or for worker zero, wait
>>       * for zero_quit */
>>      while (!quit && !(id == zero_id && zero_quit)) {
>> -        count += num;
>>          __atomic_fetch_add(&worker_stats[id].handled_packets, num,
>> -                __ATOMIC_ACQ_REL);
>> -        for (i = 0; i < num; i++)
>> -            rte_pktmbuf_free(buf[i]);
>> +                __ATOMIC_RELAXED);
>>          num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
>>
>>          if (num > 0) {
>> @@ -381,15 +367,12 @@ handle_work_for_shutdown_test(void *arg)
>>                  false, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
>>          }
>>          zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
>> -
>> -        total += num;
>>      }
>> -    count += num;
>> -    returned = rte_distributor_return_pkt(d, id, buf, num);
>> -
>>      __atomic_fetch_add(&worker_stats[id].handled_packets, num,
>> -            __ATOMIC_ACQ_REL);
>> +            __ATOMIC_RELAXED);
>>      if (id == zero_id) {
>> +        rte_distributor_return_pkt(d, id, NULL, 0);
>> +
>>          /* for worker zero, allow it to restart to pick up last packet
>>           * when all workers are shutting down.
>>           */
>> @@ -400,15 +383,11 @@ handle_work_for_shutdown_test(void *arg)
>>
>>          while (!quit) {
>>
>>              __atomic_fetch_add(&worker_stats[id].handled_packets,
>> -                    num, __ATOMIC_ACQ_REL);
>> -            count += num;
>> -            rte_pktmbuf_free(pkt);
>> +                    num, __ATOMIC_RELAXED);
>>              num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
>>          }
>> -        returned = rte_distributor_return_pkt(d,
>> -                id, buf, num);
>> -        printf("Num returned = %d\n", returned);
>>      }
>> +    rte_distributor_return_pkt(d, id, buf, num);
>>      return 0;
>>  }
>>
>> @@ -424,7 +403,9 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
>>  {
>>      struct rte_distributor *d = wp->dist;
>>      struct rte_mbuf *bufs[BURST];
>> -    unsigned i;
>> +    struct rte_mbuf *bufs2[BURST];
>> +    unsigned int i;
>> +    unsigned int failed = 0;
>>
>>      printf("=== Sanity test of worker shutdown ===\n");
>>
>> @@ -450,16 +431,17 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
>>       */
>>
>>      /* get more buffers to queue up, again setting them to the same flow */
>> -    if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
>> +    if (rte_mempool_get_bulk(p, (void *)bufs2, BURST) != 0) {
>>          printf("line %d: Error getting mbufs from pool\n", __LINE__);
>> +        rte_mempool_put_bulk(p, (void *)bufs, BURST);
>>          return -1;
>>      }
>>      for (i = 0; i < BURST; i++)
>> -        bufs[i]->hash.usr = 1;
>> +        bufs2[i]->hash.usr = 1;
>>
>>      /* get worker zero to quit */
>>      zero_quit = 1;
>> -    rte_distributor_process(d, bufs, BURST);
>> +    rte_distributor_process(d, bufs2, BURST);
>>
>>      /* flush the distributor */
>>      rte_distributor_flush(d);
>> @@ -468,15 +450,21 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
>>      for (i = 0; i < rte_lcore_count() - 1; i++)
>>          printf("Worker %u handled %u packets\n", i,
>>              __atomic_load_n(&worker_stats[i].handled_packets,
>> -                __ATOMIC_ACQUIRE));
>> +                __ATOMIC_RELAXED));
>>
>>      if (total_packet_count() != BURST * 2) {
>>          printf("Line %d: Error, not all packets flushed. "
>>              "Expected %u, got %u\n",
>>              __LINE__, BURST * 2, total_packet_count());
>> -        return -1;
>> +        failed = 1;
>>      }
>>
>> +    rte_mempool_put_bulk(p, (void *)bufs, BURST);
>> +    rte_mempool_put_bulk(p, (void *)bufs2, BURST);
>> +
>> +    if (failed)
>> +        return -1;
>> +
>>      printf("Sanity test with worker shutdown passed\n\n");
>>      return 0;
>>  }
>> @@ -490,7 +478,8 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
>>  {
>>      struct rte_distributor *d = wp->dist;
>>      struct rte_mbuf *bufs[BURST];
>> -    unsigned i;
>> +    unsigned int i;
>> +    unsigned int failed = 0;
>>
>>      printf("=== Test flush fn with worker shutdown (%s) ===\n", wp->name);
>> @@ -522,15 +511,20 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
>>      for (i = 0; i < rte_lcore_count() - 1; i++)
>>          printf("Worker %u handled %u packets\n", i,
>>              __atomic_load_n(&worker_stats[i].handled_packets,
>> -                __ATOMIC_ACQUIRE));
>> +                __ATOMIC_RELAXED));
>>
>>      if (total_packet_count() != BURST) {
>>          printf("Line %d: Error, not all packets flushed. "
" >> "Expected %u, got %u\n", >> __LINE__, BURST, total_packet_count()); >> - return -1; >> + failed = 1; >> } >> >> + rte_mempool_put_bulk(p, (void *)bufs, BURST); >> + >> + if (failed) >> + return -1; >> + >> printf("Flush test with worker shutdown passed\n\n"); >> return 0; >> } >> @@ -596,7 +590,10 @@ quit_workers(struct worker_params *wp, struct >> rte_mempool *p) >> const unsigned num_workers = rte_lcore_count() - 1; >> unsigned i; >> struct rte_mbuf *bufs[RTE_MAX_LCORE]; >> - rte_mempool_get_bulk(p, (void *)bufs, num_workers); >> + if (rte_mempool_get_bulk(p, (void *)bufs, num_workers) != 0) { >> + printf("line %d: Error getting mbufs from pool\n", __LINE__); >> + return; >> + } >> >> zero_quit = 0; >> quit = 1; >> @@ -604,11 +601,12 @@ quit_workers(struct worker_params *wp, struct >> rte_mempool *p) >> bufs[i]->hash.usr = i << 1; >> rte_distributor_process(d, bufs, num_workers); >> >> - rte_mempool_put_bulk(p, (void *)bufs, num_workers); >> - >> rte_distributor_process(d, NULL, 0); >> rte_distributor_flush(d); >> rte_eal_mp_wait_lcore(); >> + >> + rte_mempool_put_bulk(p, (void *)bufs, num_workers); >> + >> quit = 0; >> worker_idx = 0; >> zero_idx = RTE_MAX_LCORE; >> -- >> 2.17.1 -- Lukasz Wojciechowski Principal Software Engineer Samsung R&D Institute Poland Samsung Electronics Office +48 22 377 88 25 l.wojciechow@partner.samsung.com