From: Aaron Conole <aconole@redhat.com>
To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Cc: dev@dpdk.org, Viacheslav Ovsiienko, Anatoly Burakov
Subject: Re: [dpdk-dev] [PATCH v6 3/3] app/test: add allocator performance autotest
Date: Tue, 12 Oct 2021 09:53:38 -0400
In-Reply-To: <20211011085644.2716490-4-dkozlyuk@nvidia.com> (Dmitry Kozlyuk's message of "Mon, 11 Oct 2021 11:56:44 +0300")
References: <20210921081632.858873-1-dkozlyuk@nvidia.com> <20211011085644.2716490-1-dkozlyuk@nvidia.com> <20211011085644.2716490-4-dkozlyuk@nvidia.com>

Dmitry Kozlyuk <dkozlyuk@nvidia.com> writes:

> Memory allocator performance is crucial to applications that deal
> with large amount of memory or allocate frequently. DPDK allocator
> performance is affected by EAL options, API used and, at least,
> allocation size. New autotest is intended to be run with different
> EAL options. It measures performance with a range of sizes
> for dirrerent APIs: rte_malloc, rte_zmalloc, and rte_memzone_reserve.
>
> Work distribution between allocation and deallocation depends on EAL
> options. The test prints both times and total time to ease comparison.
>
> Memory can be filled with zeroes at different points of allocation path,
> but it always takes considerable fraction of overall timing. This is why
> the test measures filling speed and prints how long clearing would take
> for each size as a hint.
>
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Reviewed-by: Viacheslav Ovsiienko
> ---

This isn't really a test, imho. There are no assert()s. How does a
developer who tries to fix a bug in this area know what is acceptable?

Please switch the printf()s to RTE_LOG calls, and add some RTE_TEST_ASSERT
calls to enforce some time range at the least. Otherwise this test will
not really be checking the performance - just giving a report somewhere.
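
Something along these lines is the shape I have in mind (a rough,
untested sketch only: check_alloc_result() and ALLOC_TIME_BUDGET_US are
invented here for illustration, and any real budget would need tuning
per EAL configuration and platform):

#include <rte_log.h>
#include <rte_test.h>

/* Purely illustrative budget, not a value from the patch. */
#define ALLOC_TIME_BUDGET_US 100.0

static int
check_alloc_result(const char *api, size_t size, double alloc_us,
                double free_us)
{
        /* Report through the log framework instead of bare printf(). */
        RTE_LOG(INFO, USER1, "%s: size=%zu alloc=%.2f us free=%.2f us\n",
                api, size, alloc_us, free_us);
        /* Fail the autotest when the measurement leaves the accepted range. */
        RTE_TEST_ASSERT(alloc_us <= ALLOC_TIME_BUDGET_US,
                "%s(%zu) took %.2f us, budget is %.2f us",
                api, size, alloc_us, ALLOC_TIME_BUDGET_US);
        return 0;
}

The caller (test_alloc_perf() in your patch) would then propagate the -1
that RTE_TEST_ASSERT makes this helper return on failure, so the autotest
can actually fail rather than only print numbers.
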
Also, I don't understand the way the memset test works here. You do one
large memset at the very beginning and then extrapolate the time it
would take. Does that hold any value or should we do a memset in each
iteration and enforce a scaled time?
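
If we go the per-iteration route, I am picturing roughly the following
(an untested sketch; it reuses the ptrs/runs_done/size variables and the
tsc_to_us() helper from your patch, and would be called between the
allocation loop and the free loop):

/* Time memset over the buffers allocated in this iteration instead of
 * extrapolating from a single up-front 1 GiB memset. */
static double
measure_memset_us(void **ptrs, size_t runs_done, size_t size)
{
        uint64_t tsc;
        size_t j;

        tsc = rte_rdtsc_precise();
        for (j = 0; j < runs_done; j++)
                memset(ptrs[j], 0, size);
        tsc = rte_rdtsc_precise() - tsc;

        return tsc_to_us(tsc, runs_done);
}

The per-size result could then be checked against a budget scaled by
size, rather than against a figure derived from the one-off measurement.
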
>  app/test/meson.build        |   2 +
>  app/test/test_malloc_perf.c | 161 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 163 insertions(+)
>  create mode 100644 app/test/test_malloc_perf.c
>
> diff --git a/app/test/meson.build b/app/test/meson.build
> index f144d8b8ed..47d1d60ded 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -85,6 +85,7 @@ test_sources = files(
>          'test_lpm6_perf.c',
>          'test_lpm_perf.c',
>          'test_malloc.c',
> +        'test_malloc_perf.c',
>          'test_mbuf.c',
>          'test_member.c',
>          'test_member_perf.c',
> @@ -282,6 +283,7 @@ fast_tests = [
>
>  perf_test_names = [
>          'ring_perf_autotest',
> +        'malloc_perf_autotest',
>          'mempool_perf_autotest',
>          'memcpy_perf_autotest',
>          'hash_perf_autotest',
> diff --git a/app/test/test_malloc_perf.c b/app/test/test_malloc_perf.c
> new file mode 100644
> index 0000000000..fa7357f540
> --- /dev/null
> +++ b/app/test/test_malloc_perf.c
> @@ -0,0 +1,161 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2021 NVIDIA Corporation & Affiliates
> + */
> +
> +#include <inttypes.h>
> +#include <string.h>
> +#include <rte_cycles.h>
> +#include <rte_errno.h>
> +#include <rte_malloc.h>
> +#include <rte_memzone.h>
> +
> +#include "test.h"
> +
> +typedef void * (alloc_t)(const char *name, size_t size, unsigned int align);
> +typedef void (free_t)(void *addr);
> +
> +static const uint64_t KB = 1 << 10;
> +static const uint64_t GB = 1 << 30;
> +
> +static double
> +tsc_to_us(uint64_t tsc, size_t runs)
> +{
> +        return (double)tsc / rte_get_tsc_hz() * US_PER_S / runs;
> +}
> +
> +static int
> +test_memset_perf(double *us_per_gb)
> +{
> +        static const size_t RUNS = 20;
> +
> +        void *ptr;
> +        size_t i;
> +        uint64_t tsc;
> +
> +        puts("Performance: memset");
> +
> +        ptr = rte_malloc(NULL, GB, 0);
> +        if (ptr == NULL) {
> +                printf("rte_malloc(size=%"PRIx64") failed\n", GB);
> +                return -1;
> +        }
> +
> +        tsc = rte_rdtsc_precise();
> +        for (i = 0; i < RUNS; i++)
> +                memset(ptr, 0, GB);
> +        tsc = rte_rdtsc_precise() - tsc;
> +
> +        *us_per_gb = tsc_to_us(tsc, RUNS);
> +        printf("Result: %f.3 GiB/s <=> %.2f us/MiB\n",
> +                        US_PER_S / *us_per_gb, *us_per_gb / KB);
> +
> +        rte_free(ptr);
> +        putchar('\n');
> +        return 0;
> +}
> +
> +static int
> +test_alloc_perf(const char *name, alloc_t *alloc_fn, free_t free_fn,
> +                size_t max_runs, double memset_gb_us)
> +{
> +        static const size_t SIZES[] = {
> +                        1 << 6, 1 << 7, 1 << 10, 1 << 12, 1 << 16, 1 << 20,
> +                        1 << 21, 1 << 22, 1 << 24, 1 << 30 };
> +
> +        size_t i, j;
> +        void **ptrs;
> +
> +        printf("Performance: %s\n", name);
> +
> +        ptrs = calloc(max_runs, sizeof(ptrs[0]));
> +        if (ptrs == NULL) {
> +                puts("Cannot allocate memory for pointers");
> +                return -1;
> +        }
> +
> +        printf("%12s%8s%12s%12s%12s%12s\n",
> +                        "Size (B)", "Runs", "Alloc (us)", "Free (us)",
> +                        "Total (us)", "memset (us)");
> +        for (i = 0; i < RTE_DIM(SIZES); i++) {
> +                size_t size = SIZES[i];
> +                size_t runs_done;
> +                uint64_t tsc_start, tsc_alloc, tsc_free;
> +                double alloc_time, free_time, memset_time;
> +
> +                tsc_start = rte_rdtsc_precise();
> +                for (j = 0; j < max_runs; j++) {
> +                        ptrs[j] = alloc_fn(NULL, size, 0);
> +                        if (ptrs[j] == NULL)
> +                                break;
> +                }
> +                tsc_alloc = rte_rdtsc_precise() - tsc_start;
> +
> +                if (j == 0) {
> +                        printf("%12zu Interrupted: out of memory.\n", size);
> +                        break;
> +                }
> +                runs_done = j;
> +
> +                tsc_start = rte_rdtsc_precise();
> +                for (j = 0; j < runs_done && ptrs[j] != NULL; j++)
> +                        free_fn(ptrs[j]);
> +                tsc_free = rte_rdtsc_precise() - tsc_start;
> +
> +                alloc_time = tsc_to_us(tsc_alloc, runs_done);
> +                free_time = tsc_to_us(tsc_free, runs_done);
> +                memset_time = memset_gb_us * size / GB;
> +                printf("%12zu%8zu%12.2f%12.2f%12.2f%12.2f\n",
> +                                size, runs_done, alloc_time, free_time,
> +                                alloc_time + free_time, memset_time);
> +
> +                memset(ptrs, 0, max_runs * sizeof(ptrs[0]));
> +        }
> +
> +        free(ptrs);
> +        putchar('\n');
> +        return 0;
> +}
> +
> +static void *
> +memzone_alloc(const char *name __rte_unused, size_t size, unsigned int align)
> +{
> +        const struct rte_memzone *mz;
> +        char gen_name[RTE_MEMZONE_NAMESIZE];
> +
> +        snprintf(gen_name, sizeof(gen_name), "test-mz-%"PRIx64, rte_rdtsc());
> +        mz = rte_memzone_reserve_aligned(gen_name, size, SOCKET_ID_ANY,
> +                        RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY, align);
> +        return (void *)(uintptr_t)mz;
> +}
> +
> +static void
> +memzone_free(void *addr)
> +{
> +        rte_memzone_free((struct rte_memzone *)addr);
> +}
> +
> +static int
> +test_malloc_perf(void)
> +{
> +        static const size_t MAX_RUNS = 10000;
> +
> +        double memset_gb_us;
> +
> +        if (test_memset_perf(&memset_gb_us) < 0)
> +                return -1;
> +
> +        if (test_alloc_perf("rte_malloc", rte_malloc, rte_free,
> +                        MAX_RUNS, memset_gb_us) < 0)
> +                return -1;
> +        if (test_alloc_perf("rte_zmalloc", rte_zmalloc, rte_free,
> +                        MAX_RUNS, memset_gb_us) < 0)
> +                return -1;
> +
> +        if (test_alloc_perf("rte_memzone_reserve", memzone_alloc, memzone_free,
> +                        RTE_MAX_MEMZONE - 1, memset_gb_us) < 0)
> +                return -1;
> +
> +        return 0;
> +}
> +
> +REGISTER_TEST_COMMAND(malloc_perf_autotest, test_malloc_perf);