From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
To: Lukasz Wojciechowski
Cc: David Hunt, dpdk stable
Date: Wed, 18 Nov 2020 16:35:10 +0000
Message-Id: <20201118163558.1101823-24-ktraynor@redhat.com>
In-Reply-To: <20201118163558.1101823-1-ktraynor@redhat.com>
References: <20201118163558.1101823-1-ktraynor@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'test/distributor: fix freeing mbufs' has been
 queued to LTS release 18.11.11
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to LTS release 18.11.11

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/24/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/2a9389bb63bba97f79d389dd806581388b2524f5

Thanks.

Kevin.
---
>From 2a9389bb63bba97f79d389dd806581388b2524f5 Mon Sep 17 00:00:00 2001
From: Lukasz Wojciechowski
Date: Sat, 17 Oct 2020 05:06:51 +0200
Subject: [PATCH] test/distributor: fix freeing mbufs

[ upstream commit 13b13100eaf6de02e2be428dc4ff5099ba796ab5 ]

Sanity tests with mbuf alloc and shutdown tests assume that mbufs
passed to worker cores are freed in the handlers. Such packets should
not be returned to the distributor's main core. The only packets that
should be returned are the packets sent after completion of the tests
in the quit_workers function.

This patch stops returning mbufs to the distributor's core. In the case
of the shutdown tests it is impossible to determine how the worker and
distributor threads would synchronize. Packets used by the tests should
be freed and packets used during quit_workers() should not. That is why
returning mbufs to the mempool is moved from the worker threads to the
test procedure run on the distributor thread.

Additionally this patch cleans up unused variables.

Fixes: c0de0eb82e40 ("distributor: switch over to new API")

Signed-off-by: Lukasz Wojciechowski
Acked-by: David Hunt
---
 test/test/test_distributor.c | 67 ++++++++++++++++++------------------
 1 file changed, 33 insertions(+), 34 deletions(-)

diff --git a/test/test/test_distributor.c b/test/test/test_distributor.c
index 45b4b22b43..d13e38f062 100644
--- a/test/test/test_distributor.c
+++ b/test/test/test_distributor.c
@@ -64,5 +64,5 @@ handle_work(void *arg)
 	struct worker_params *wp = arg;
 	struct rte_distributor *db = wp->dist;
-	unsigned int count = 0, num;
+	unsigned int num;
 	unsigned int id = __sync_fetch_and_add(&worker_idx, 1);
 
@@ -71,5 +71,4 @@ handle_work(void *arg)
 		__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 				__ATOMIC_RELAXED);
-		count += num;
 		num = rte_distributor_get_pkt(db, id,
 				buf, buf, num);
@@ -77,5 +76,4 @@ handle_work(void *arg)
 	__atomic_fetch_add(&worker_stats[id].handled_packets, num,
 			__ATOMIC_RELAXED);
-	count += num;
 	rte_distributor_return_pkt(db, id, buf, num);
 	return 0;
@@ -269,5 +267,4 @@ handle_work_with_free_mbufs(void *arg)
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int i;
 	unsigned int num;
@@ -277,5 +274,4 @@ handle_work_with_free_mbufs(void *arg)
 	while (!quit) {
 		worker_stats[id].handled_packets += num;
-		count += num;
 		for (i = 0; i < num; i++)
 			rte_pktmbuf_free(buf[i]);
@@ -283,5 +279,4 @@ handle_work_with_free_mbufs(void *arg)
 	}
 	worker_stats[id].handled_packets += num;
-	count += num;
 	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
@@ -309,5 +304,4 @@ sanity_test_with_mbuf_alloc(struct worker_params *wp, struct rte_mempool *p)
 		for (j = 0; j < BURST; j++) {
 			bufs[j]->hash.usr = (i+j) << 1;
-			rte_mbuf_refcnt_set(bufs[j], 1);
 		}
 
@@ -333,13 +327,8 @@ static int
 handle_work_for_shutdown_test(void *arg)
 {
-	struct rte_mbuf *pkt = NULL;
 	struct rte_mbuf *buf[8] __rte_cache_aligned;
 	struct worker_params *wp = arg;
 	struct rte_distributor *d = wp->dist;
-	unsigned int count = 0;
 	unsigned int num;
-	unsigned int total = 0;
-	unsigned int i;
-	unsigned int returned = 0;
 	unsigned int zero_id = 0;
 	unsigned int zero_unset;
@@ -359,7 +348,4 @@ handle_work_for_shutdown_test(void *arg)
 	while (!quit && !(id == zero_id && zero_quit)) {
 		worker_stats[id].handled_packets += num;
-		count += num;
-		for (i = 0; i < num; i++)
-			rte_pktmbuf_free(buf[i]);
 		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 
@@ -370,12 +356,10 @@ handle_work_for_shutdown_test(void *arg)
 		}
 		zero_id = __atomic_load_n(&zero_idx, __ATOMIC_ACQUIRE);
-
-		total += num;
 	}
 	worker_stats[id].handled_packets += num;
-	count += num;
-	returned = rte_distributor_return_pkt(d, id, buf, num);
 
 	if (id == zero_id) {
+		rte_distributor_return_pkt(d, id, NULL, 0);
+
 		/* for worker zero, allow it to restart to pick up last packet
 		 * when all workers are shutting down.
@@ -388,12 +372,8 @@ handle_work_for_shutdown_test(void *arg)
 		while (!quit) {
 			worker_stats[id].handled_packets += num;
-			count += num;
-			rte_pktmbuf_free(pkt);
 			num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
 		}
-		returned = rte_distributor_return_pkt(d,
-				id, buf, num);
-		printf("Num returned = %d\n", returned);
 	}
+	rte_distributor_return_pkt(d, id, buf, num);
 	return 0;
 }
@@ -411,5 +391,7 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	struct rte_mbuf *bufs2[BURST];
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Sanity test of worker shutdown ===\n");
@@ -437,14 +419,15 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 	/* get more buffers to queue up, again setting them to the same flow */
-	if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
+	if (rte_mempool_get_bulk(p, (void *)bufs2, BURST) != 0) {
 		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		rte_mempool_put_bulk(p, (void *)bufs, BURST);
 		return -1;
 	}
 
 	for (i = 0; i < BURST; i++)
-		bufs[i]->hash.usr = 1;
+		bufs2[i]->hash.usr = 1;
 
 	/* get worker zero to quit */
 	zero_quit = 1;
-	rte_distributor_process(d, bufs, BURST);
+	rte_distributor_process(d, bufs2, BURST);
 
 	/* flush the distributor */
@@ -460,7 +443,13 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
 				"Expected %u, got %u\n",
 				__LINE__, BURST * 2, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+	rte_mempool_put_bulk(p, (void *)bufs2, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Sanity test with worker shutdown passed\n\n");
 	return 0;
@@ -476,5 +465,6 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 	struct rte_distributor *d = wp->dist;
 	struct rte_mbuf *bufs[BURST];
-	unsigned i;
+	unsigned int i;
+	unsigned int failed = 0;
 
 	printf("=== Test flush fn with worker shutdown (%s) ===\n", wp->name);
@@ -513,7 +503,12 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
 				"Expected %u, got %u\n",
 				__LINE__, BURST, total_packet_count());
-		return -1;
+		failed = 1;
 	}
 
+	rte_mempool_put_bulk(p, (void *)bufs, BURST);
+
+	if (failed)
+		return -1;
+
 	printf("Flush test with worker shutdown passed\n\n");
 	return 0;
@@ -581,5 +576,8 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 	unsigned i;
 	struct rte_mbuf *bufs[RTE_MAX_LCORE];
-	rte_mempool_get_bulk(p, (void *)bufs, num_workers);
+	if (rte_mempool_get_bulk(p, (void *)bufs, num_workers) != 0) {
+		printf("line %d: Error getting mbufs from pool\n", __LINE__);
+		return;
+	}
 
 	zero_quit = 0;
@@ -589,9 +587,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
 
 	rte_distributor_process(d, bufs, num_workers);
-	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
-
 	rte_distributor_process(d, NULL, 0);
 	rte_distributor_flush(d);
 	rte_eal_mp_wait_lcore();
+
+	rte_mempool_put_bulk(p, (void *)bufs, num_workers);
+
 	quit = 0;
 	worker_idx = 0;
-- 
2.26.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2020-11-18 16:33:38.421356226 +0000
+++ 0024-test-distributor-fix-freeing-mbufs.patch	2020-11-18 16:33:37.929215062 +0000
@@ -1 +1 @@
-From 13b13100eaf6de02e2be428dc4ff5099ba796ab5 Mon Sep 17 00:00:00 2001
+From 2a9389bb63bba97f79d389dd806581388b2524f5 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 13b13100eaf6de02e2be428dc4ff5099ba796ab5 ]
+
@@ -23 +24,0 @@
-Cc: stable@dpdk.org
@@ -28 +29 @@
- app/test/test_distributor.c | 67 ++++++++++++++++++-------------------
+ test/test/test_distributor.c | 67 ++++++++++++++++++------------------
@@ -31,4 +32,4 @@
-diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
-index 6cd7a2edda..ec1fe348ba 100644
---- a/app/test/test_distributor.c
-+++ b/app/test/test_distributor.c
+diff --git a/test/test/test_distributor.c b/test/test/test_distributor.c
+index 45b4b22b43..d13e38f062 100644
+--- a/test/test/test_distributor.c
++++ b/test/test/test_distributor.c
@@ -40 +41 @@
- 	unsigned int id = __atomic_fetch_add(&worker_idx, 1, __ATOMIC_RELAXED);
+ 	unsigned int id = __sync_fetch_and_add(&worker_idx, 1);
@@ -92 +93 @@
-@@ -360,7 +349,4 @@ handle_work_for_shutdown_test(void *arg)
+@@ -359,7 +348,4 @@ handle_work_for_shutdown_test(void *arg)
@@ -100 +101 @@
-@@ -371,12 +357,10 @@ handle_work_for_shutdown_test(void *arg)
+@@ -370,12 +356,10 @@ handle_work_for_shutdown_test(void *arg)
@@ -115 +116 @@
-@@ -389,12 +373,8 @@ handle_work_for_shutdown_test(void *arg)
+@@ -388,12 +372,8 @@ handle_work_for_shutdown_test(void *arg)
@@ -129 +130 @@
-@@ -412,5 +392,7 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
+@@ -411,5 +391,7 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
@@ -138 +139 @@
-@@ -438,14 +420,15 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
+@@ -437,14 +419,15 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
@@ -157 +158 @@
-@@ -461,7 +444,13 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
+@@ -460,7 +443,13 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
@@ -172 +173 @@
-@@ -477,5 +466,6 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
+@@ -476,5 +465,6 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
@@ -180 +181 @@
-@@ -514,7 +504,12 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
+@@ -513,7 +503,12 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
@@ -194 +195 @@
-@@ -582,5 +577,8 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
+@@ -581,5 +576,8 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
@@ -204 +205 @@
-@@ -590,9 +588,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
+@@ -589,9 +587,10 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
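
For reference, a minimal sketch of the mbuf ownership pattern described in
the commit message above: the worker only counts packets and hands them back
through the distributor API, while the test code on the distributor thread
allocates the burst, runs it through the distributor, and is the only place
that puts the mbufs back into the mempool. It assumes a distributor d and a
mempool p that are already set up and a worker already launched on another
lcore; worker_fn(), run_one_burst(), the quit flag and the BURST constant are
illustrative names for this sketch only, not symbols from test_distributor.c.

#include <stdio.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_distributor.h>

#define BURST 32

static volatile int quit; /* set by the control thread to stop the worker */

/* Worker side: only count and hand packets back via the distributor API;
 * do not free the test mbufs or return them to the mempool here. */
static int
worker_fn(struct rte_distributor *d, unsigned int id, unsigned int *handled)
{
	struct rte_mbuf *buf[8];
	unsigned int num;

	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
	while (!quit) {
		*handled += num;
		num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
	}
	*handled += num;
	rte_distributor_return_pkt(d, id, buf, num);
	return 0;
}

/* Distributor-thread side: the test owns the burst from allocation to
 * release, so the mbufs go back to the mempool in exactly one place. */
static int
run_one_burst(struct rte_distributor *d, struct rte_mempool *p)
{
	struct rte_mbuf *bufs[BURST];
	unsigned int i;

	if (rte_mempool_get_bulk(p, (void *)bufs, BURST) != 0) {
		printf("line %d: Error getting mbufs from pool\n", __LINE__);
		return -1;
	}
	for (i = 0; i < BURST; i++)
		bufs[i]->hash.usr = i << 1;	/* spread across flows */

	rte_distributor_process(d, bufs, BURST);
	rte_distributor_flush(d);

	/* check the per-worker counters here, then release the burst */
	rte_mempool_put_bulk(p, (void *)bufs, BURST);
	return 0;
}

Releasing the burst in this single place on the distributor thread avoids
the worker/distributor synchronization problem the commit message mentions
for the shutdown tests.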