From: Aaron Conole <aconole@redhat.com>
To: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
Cc: <dev@dpdk.org>,
	 Viacheslav Ovsiienko <viacheslavo@oss.nvidia.com>,
	Anatoly Burakov <anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH v6 3/3] app/test: add allocator performance autotest
Date: Tue, 12 Oct 2021 09:53:38 -0400	[thread overview]
Message-ID: <f7tr1cqe4d9.fsf@redhat.com>
In-Reply-To: <20211011085644.2716490-4-dkozlyuk@nvidia.com> (Dmitry Kozlyuk's message of "Mon, 11 Oct 2021 11:56:44 +0300")

Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com> writes:

> Memory allocator performance is crucial to applications that deal
> with large amounts of memory or allocate frequently. DPDK allocator
> performance is affected by EAL options, the API used, and, not least,
> the allocation size. The new autotest is intended to be run with
> different EAL options. It measures performance with a range of sizes
> for different APIs: rte_malloc, rte_zmalloc, and rte_memzone_reserve.
>
> Work distribution between allocation and deallocation depends on EAL
> options. The test prints both times and total time to ease comparison.
>
> Memory can be filled with zeroes at different points of the allocation
> path, but it always takes a considerable fraction of the overall timing.
> This is why the test measures filling speed and, as a hint, prints how
> long clearing would take for each size.
>
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---

This isn't really a test, imho.  There are no assert()s.  How does a
developer trying to fix a bug in this area know what is acceptable?

Please switch the printf()s to RTE_LOG calls, and add some
RTE_TEST_ASSERT calls to enforce at least some time range.  Otherwise
this test will not really be checking the performance - it will just be
printing a report somewhere.
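
For example, something along these lines could gate the per-size result
(rough, untested sketch; MAX_ALLOC_US is a made-up threshold that would
have to be chosen for the machines this runs on):

	RTE_LOG(INFO, USER1, "%12zu%8zu%12.2f%12.2f%12.2f%12.2f\n",
			size, runs_done, alloc_time, free_time,
			alloc_time + free_time, memset_time);
	RTE_TEST_ASSERT(alloc_time < MAX_ALLOC_US,
			"alloc of %zu bytes took %.2f us, expected < %.2f us",
			size, alloc_time, MAX_ALLOC_US);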

Also, I don't understand the way the memset test works here.  You time
a run of large memsets once at the very beginning and then extrapolate
how long clearing each allocation size would take.  Does that hold any
value, or should we do a memset in each iteration and enforce a scaled
time?
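
If the latter, something like this between the allocation and free
loops in test_alloc_perf() would measure the real per-size clearing
cost, reusing the variables already in the patch (untested sketch;
tsc_memset would be a new local):

	tsc_start = rte_rdtsc_precise();
	for (j = 0; j < runs_done; j++)
		memset(ptrs[j], 0, size);
	tsc_memset = rte_rdtsc_precise() - tsc_start;
	memset_time = tsc_to_us(tsc_memset, runs_done);

That would replace the extrapolation from the 1 GiB run and give a
number an assert could actually be written against.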

>  app/test/meson.build        |   2 +
>  app/test/test_malloc_perf.c | 161 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 163 insertions(+)
>  create mode 100644 app/test/test_malloc_perf.c
>
> diff --git a/app/test/meson.build b/app/test/meson.build
> index f144d8b8ed..47d1d60ded 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -85,6 +85,7 @@ test_sources = files(
>          'test_lpm6_perf.c',
>          'test_lpm_perf.c',
>          'test_malloc.c',
> +        'test_malloc_perf.c',
>          'test_mbuf.c',
>          'test_member.c',
>          'test_member_perf.c',
> @@ -282,6 +283,7 @@ fast_tests = [
>  
>  perf_test_names = [
>          'ring_perf_autotest',
> +        'malloc_perf_autotest',
>          'mempool_perf_autotest',
>          'memcpy_perf_autotest',
>          'hash_perf_autotest',
> diff --git a/app/test/test_malloc_perf.c b/app/test/test_malloc_perf.c
> new file mode 100644
> index 0000000000..fa7357f540
> --- /dev/null
> +++ b/app/test/test_malloc_perf.c
> @@ -0,0 +1,161 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2021 NVIDIA Corporation & Affiliates
> + */
> +
> +#include <inttypes.h>
> +#include <string.h>
> +#include <rte_cycles.h>
> +#include <rte_errno.h>
> +#include <rte_malloc.h>
> +#include <rte_memzone.h>
> +
> +#include "test.h"
> +
> +typedef void * (alloc_t)(const char *name, size_t size, unsigned int align);
> +typedef void (free_t)(void *addr);
> +
> +static const uint64_t KB = 1 << 10;
> +static const uint64_t GB = 1 << 30;
> +
> +static double
> +tsc_to_us(uint64_t tsc, size_t runs)
> +{
> +	return (double)tsc / rte_get_tsc_hz() * US_PER_S / runs;
> +}
> +
> +static int
> +test_memset_perf(double *us_per_gb)
> +{
> +	static const size_t RUNS = 20;
> +
> +	void *ptr;
> +	size_t i;
> +	uint64_t tsc;
> +
> +	puts("Performance: memset");
> +
> +	ptr = rte_malloc(NULL, GB, 0);
> +	if (ptr == NULL) {
> +		printf("rte_malloc(size=%"PRIx64") failed\n", GB);
> +		return -1;
> +	}
> +
> +	tsc = rte_rdtsc_precise();
> +	for (i = 0; i < RUNS; i++)
> +		memset(ptr, 0, GB);
> +	tsc = rte_rdtsc_precise() - tsc;
> +
> +	*us_per_gb = tsc_to_us(tsc, RUNS);
> +	printf("Result: %.3f GiB/s <=> %.2f us/MiB\n",
> +			US_PER_S / *us_per_gb, *us_per_gb / KB);
> +
> +	rte_free(ptr);
> +	putchar('\n');
> +	return 0;
> +}
> +
> +static int
> +test_alloc_perf(const char *name, alloc_t *alloc_fn, free_t free_fn,
> +		size_t max_runs, double memset_gb_us)
> +{
> +	static const size_t SIZES[] = {
> +			1 << 6, 1 << 7, 1 << 10, 1 << 12, 1 << 16, 1 << 20,
> +			1 << 21, 1 << 22, 1 << 24, 1 << 30 };
> +
> +	size_t i, j;
> +	void **ptrs;
> +
> +	printf("Performance: %s\n", name);
> +
> +	ptrs = calloc(max_runs, sizeof(ptrs[0]));
> +	if (ptrs == NULL) {
> +		puts("Cannot allocate memory for pointers");
> +		return -1;
> +	}
> +
> +	printf("%12s%8s%12s%12s%12s%12s\n",
> +			"Size (B)", "Runs", "Alloc (us)", "Free (us)",
> +			"Total (us)", "memset (us)");
> +	for (i = 0; i < RTE_DIM(SIZES); i++) {
> +		size_t size = SIZES[i];
> +		size_t runs_done;
> +		uint64_t tsc_start, tsc_alloc, tsc_free;
> +		double alloc_time, free_time, memset_time;
> +
> +		tsc_start = rte_rdtsc_precise();
> +		for (j = 0; j < max_runs; j++) {
> +			ptrs[j] = alloc_fn(NULL, size, 0);
> +			if (ptrs[j] == NULL)
> +				break;
> +		}
> +		tsc_alloc = rte_rdtsc_precise() - tsc_start;
> +
> +		if (j == 0) {
> +			printf("%12zu Interrupted: out of memory.\n", size);
> +			break;
> +		}
> +		runs_done = j;
> +
> +		tsc_start = rte_rdtsc_precise();
> +		for (j = 0; j < runs_done && ptrs[j] != NULL; j++)
> +			free_fn(ptrs[j]);
> +		tsc_free = rte_rdtsc_precise() - tsc_start;
> +
> +		alloc_time = tsc_to_us(tsc_alloc, runs_done);
> +		free_time = tsc_to_us(tsc_free, runs_done);
> +		memset_time = memset_gb_us * size / GB;
> +		printf("%12zu%8zu%12.2f%12.2f%12.2f%12.2f\n",
> +				size, runs_done, alloc_time, free_time,
> +				alloc_time + free_time, memset_time);
> +
> +		memset(ptrs, 0, max_runs * sizeof(ptrs[0]));
> +	}
> +
> +	free(ptrs);
> +	putchar('\n');
> +	return 0;
> +}
> +
> +static void *
> +memzone_alloc(const char *name __rte_unused, size_t size, unsigned int align)
> +{
> +	const struct rte_memzone *mz;
> +	char gen_name[RTE_MEMZONE_NAMESIZE];
> +
> +	snprintf(gen_name, sizeof(gen_name), "test-mz-%"PRIx64, rte_rdtsc());
> +	mz = rte_memzone_reserve_aligned(gen_name, size, SOCKET_ID_ANY,
> +			RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY, align);
> +	return (void *)(uintptr_t)mz;
> +}
> +
> +static void
> +memzone_free(void *addr)
> +{
> +	rte_memzone_free((struct rte_memzone *)addr);
> +}
> +
> +static int
> +test_malloc_perf(void)
> +{
> +	static const size_t MAX_RUNS = 10000;
> +
> +	double memset_gb_us;
> +
> +	if (test_memset_perf(&memset_gb_us) < 0)
> +		return -1;
> +
> +	if (test_alloc_perf("rte_malloc", rte_malloc, rte_free,
> +			MAX_RUNS, memset_gb_us) < 0)
> +		return -1;
> +	if (test_alloc_perf("rte_zmalloc", rte_zmalloc, rte_free,
> +			MAX_RUNS, memset_gb_us) < 0)
> +		return -1;
> +
> +	if (test_alloc_perf("rte_memzone_reserve", memzone_alloc, memzone_free,
> +			RTE_MAX_MEMZONE - 1, memset_gb_us) < 0)
> +		return -1;
> +
> +	return 0;
> +}
> +
> +REGISTER_TEST_COMMAND(malloc_perf_autotest, test_malloc_perf);

