DPDK patches and discussions
* [PATCH v2 1/2] test/service: add perf measurements for with stats mode
@ 2022-07-11 10:57 Harry van Haaren
  2022-07-11 10:57 ` [PATCH v2 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
  2022-07-11 13:18 ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
  0 siblings, 2 replies; 6+ messages in thread
From: Harry van Haaren @ 2022-07-11 10:57 UTC (permalink / raw)
  To: dev
  Cc: Harry van Haaren, Mattias Rönnblom, Honnappa Nagarahalli,
	Morten Brørup

This commit improves the performance reporting of the service
cores polling loop to show both with and without statistics
collection modes. Collecting cycle statistics is costly, due
to calls to rte_rdtsc() per service iteration.

Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

---

This is split out as a separate patch from the fix to allow
measuring the before/after of the service stats atomic fixup.
---
 app/test/test_service_cores.c | 36 ++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/app/test/test_service_cores.c b/app/test/test_service_cores.c
index ced6ed0081..7415b6b686 100644
--- a/app/test/test_service_cores.c
+++ b/app/test/test_service_cores.c
@@ -777,6 +777,22 @@ service_run_on_app_core_func(void *arg)
 	return rte_service_run_iter_on_app_lcore(*delay_service_id, 1);
 }
 
+static float
+service_app_lcore_perf_measure(uint32_t id)
+{
+	/* Performance test: call in a loop, and measure tsc() */
+	const uint32_t perf_iters = (1 << 12);
+	uint64_t start = rte_rdtsc();
+	uint32_t i;
+	for (i = 0; i < perf_iters; i++) {
+		int err = service_run_on_app_core_func(&id);
+		TEST_ASSERT_EQUAL(0, err, "perf test: returned run failure");
+	}
+	uint64_t end = rte_rdtsc();
+
+	return (end - start)/(float)perf_iters;
+}
+
 static int
 service_app_lcore_poll_impl(const int mt_safe)
 {
@@ -828,17 +844,15 @@ service_app_lcore_poll_impl(const int mt_safe)
 				"MT Unsafe: App core1 didn't return -EBUSY");
 	}
 
-	/* Performance test: call in a loop, and measure tsc() */
-	const uint32_t perf_iters = (1 << 12);
-	uint64_t start = rte_rdtsc();
-	uint32_t i;
-	for (i = 0; i < perf_iters; i++) {
-		int err = service_run_on_app_core_func(&id);
-		TEST_ASSERT_EQUAL(0, err, "perf test: returned run failure");
-	}
-	uint64_t end = rte_rdtsc();
-	printf("perf test for %s: %0.1f cycles per call\n", mt_safe ?
-		"MT Safe" : "MT Unsafe", (end - start)/(float)perf_iters);
+	/* Measure performance of no-stats and with-stats. */
+	float cyc_no_stats = service_app_lcore_perf_measure(id);
+
+	TEST_ASSERT_EQUAL(0, rte_service_set_stats_enable(id, 1),
+				"failed to enable stats for service.");
+	float cyc_with_stats = service_app_lcore_perf_measure(id);
+
+	printf("perf test for %s, no stats: %0.1f, with stats %0.1f cycles/call\n",
+		mt_safe ? "MT Safe" : "MT Unsafe", cyc_no_stats, cyc_with_stats);
 
 	unregister_all();
 	return TEST_SUCCESS;
-- 
2.32.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH v2 2/2] service: fix potential stats race-condition on MT services
  2022-07-11 10:57 [PATCH v2 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
@ 2022-07-11 10:57 ` Harry van Haaren
  2022-07-11 13:18 ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
  1 sibling, 0 replies; 6+ messages in thread
From: Harry van Haaren @ 2022-07-11 10:57 UTC (permalink / raw)
  To: dev
  Cc: Harry van Haaren, Mattias Rönnblom, Honnappa Nagarahalli,
	Morten Brørup, Bruce Richardson

This commit fixes a potential racy add that could occur if
multiple service lcores were executing the same MT-safe service
at the same time, with service statistics collection enabled.

Because multiple threads can run and execute the service, the
stats values can have multiple writer threads, so atomic
additions are required for correctness.

Note that when an MT-unsafe service is executed, a spinlock is
held, so the stats increments are protected. This fact is used
to avoid executing atomic add instructions when not required.
Regular reads and increments are used, and only the store is
specified as atomic, reducing the perf impact on e.g. x86.

This patch causes a 1.25x increase in cycle cost for polling an
MT-safe service when statistics are enabled. No change was seen
for MT-unsafe services, or when statistics are disabled.

Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

---

v2 (Thanks Honnappa, Morten, Bruce & Mattias for discussion):
- Improved handling of stat stores to ensure they're atomic by
  using __atomic_store_n() with regular loads/increments.
- Added BUILD_BUG_ON alignment checks for the uint64_t stats
  variables, tested with __rte_packed to ensure build breaks
  if not aligned naturally.

---
 lib/eal/common/rte_service.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d2b7275ac0..90d12032f0 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -54,6 +54,9 @@ struct rte_service_spec_impl {
 	uint64_t cycles_spent;
 } __rte_cache_aligned;
 
+/* Mask used to ensure uint64_t 8 byte vars are naturally aligned. */
+#define RTE_SERVICE_STAT_ALIGN_MASK (8 - 1)
+
 /* the internal values of a service core */
 struct core_state {
 	/* map of services IDs are run on this core */
@@ -359,13 +362,29 @@ service_runner_do_callback(struct rte_service_spec_impl *s,
 {
 	void *userdata = s->spec.callback_userdata;
 
+	/* Ensure the atomically stored variables are naturally aligned,
+	 * as required for regular loads to be atomic.
+	 */
+	RTE_BUILD_BUG_ON((offsetof(struct rte_service_spec_impl, calls)
+		& RTE_SERVICE_STAT_ALIGN_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_service_spec_impl, cycles_spent)
+		& RTE_SERVICE_STAT_ALIGN_MASK) != 0);
+
 	if (service_stats_enabled(s)) {
 		uint64_t start = rte_rdtsc();
 		s->spec.callback(userdata);
 		uint64_t end = rte_rdtsc();
-		s->cycles_spent += end - start;
+		uint64_t cycles = end - start;
 		cs->calls_per_service[service_idx]++;
-		s->calls++;
+		if (service_mt_safe(s)) {
+			__atomic_fetch_add(&s->cycles_spent, cycles, __ATOMIC_RELAXED);
+			__atomic_fetch_add(&s->calls, 1, __ATOMIC_RELAXED);
+		} else {
+			uint64_t cycles_new = s->cycles_spent + cycles;
+			uint64_t calls_new = s->calls + 1;
+			__atomic_store_n(&s->cycles_spent, cycles_new, __ATOMIC_RELAXED);
+			__atomic_store_n(&s->calls, calls_new, __ATOMIC_RELAXED);
+		}
 	} else
 		s->spec.callback(userdata);
 }
-- 
2.32.0



* [PATCH v3 1/2] test/service: add perf measurements for with stats mode
  2022-07-11 10:57 [PATCH v2 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
  2022-07-11 10:57 ` [PATCH v2 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
@ 2022-07-11 13:18 ` Harry van Haaren
  2022-07-11 13:18   ` [PATCH v3 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
  2022-09-02 17:17   ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Mattias Rönnblom
  1 sibling, 2 replies; 6+ messages in thread
From: Harry van Haaren @ 2022-07-11 13:18 UTC (permalink / raw)
  To: dev
  Cc: Harry van Haaren, Mattias Rönnblom, Honnappa Nagarahalli,
	Morten Brørup

This commit improves the performance reporting of the service
cores polling loop to show both with and without statistics
collection modes. Collecting cycle statistics is costly, due
to calls to rte_rdtsc() per service iteration.

Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

---

This is split out as a separate patch from the fix to allow
measuring the before/after of the service stats atomic fixup.
---
 app/test/test_service_cores.c | 36 ++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/app/test/test_service_cores.c b/app/test/test_service_cores.c
index ced6ed0081..7415b6b686 100644
--- a/app/test/test_service_cores.c
+++ b/app/test/test_service_cores.c
@@ -777,6 +777,22 @@ service_run_on_app_core_func(void *arg)
 	return rte_service_run_iter_on_app_lcore(*delay_service_id, 1);
 }
 
+static float
+service_app_lcore_perf_measure(uint32_t id)
+{
+	/* Performance test: call in a loop, and measure tsc() */
+	const uint32_t perf_iters = (1 << 12);
+	uint64_t start = rte_rdtsc();
+	uint32_t i;
+	for (i = 0; i < perf_iters; i++) {
+		int err = service_run_on_app_core_func(&id);
+		TEST_ASSERT_EQUAL(0, err, "perf test: returned run failure");
+	}
+	uint64_t end = rte_rdtsc();
+
+	return (end - start)/(float)perf_iters;
+}
+
 static int
 service_app_lcore_poll_impl(const int mt_safe)
 {
@@ -828,17 +844,15 @@ service_app_lcore_poll_impl(const int mt_safe)
 				"MT Unsafe: App core1 didn't return -EBUSY");
 	}
 
-	/* Performance test: call in a loop, and measure tsc() */
-	const uint32_t perf_iters = (1 << 12);
-	uint64_t start = rte_rdtsc();
-	uint32_t i;
-	for (i = 0; i < perf_iters; i++) {
-		int err = service_run_on_app_core_func(&id);
-		TEST_ASSERT_EQUAL(0, err, "perf test: returned run failure");
-	}
-	uint64_t end = rte_rdtsc();
-	printf("perf test for %s: %0.1f cycles per call\n", mt_safe ?
-		"MT Safe" : "MT Unsafe", (end - start)/(float)perf_iters);
+	/* Measure performance of no-stats and with-stats. */
+	float cyc_no_stats = service_app_lcore_perf_measure(id);
+
+	TEST_ASSERT_EQUAL(0, rte_service_set_stats_enable(id, 1),
+				"failed to enable stats for service.");
+	float cyc_with_stats = service_app_lcore_perf_measure(id);
+
+	printf("perf test for %s, no stats: %0.1f, with stats %0.1f cycles/call\n",
+		mt_safe ? "MT Safe" : "MT Unsafe", cyc_no_stats, cyc_with_stats);
 
 	unregister_all();
 	return TEST_SUCCESS;
-- 
2.32.0



* [PATCH v3 2/2] service: fix potential stats race-condition on MT services
  2022-07-11 13:18 ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
@ 2022-07-11 13:18   ` Harry van Haaren
  2022-10-05 13:06     ` David Marchand
  2022-09-02 17:17   ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Mattias Rönnblom
  1 sibling, 1 reply; 6+ messages in thread
From: Harry van Haaren @ 2022-07-11 13:18 UTC (permalink / raw)
  To: dev
  Cc: Harry van Haaren, Mattias Rönnblom, Honnappa Nagarahalli,
	Morten Brørup, Bruce Richardson

This commit fixes a potential racy add that could occur if
multiple service lcores were executing the same MT-safe service
at the same time, with service statistics collection enabled.

Because multiple threads can run and execute the service, the
stats values can have multiple writer threads, so atomic
additions are required for correctness.

Note that when an MT-unsafe service is executed, a spinlock is
held, so the stats increments are protected. This fact is used
to avoid executing atomic add instructions when not required.
Regular reads and increments are used, and only the store is
specified as atomic, reducing the perf impact on e.g. x86.

This patch causes a 1.25x increase in cycle cost for polling an
MT-safe service when statistics are enabled. No change was seen
for MT-unsafe services, or when statistics are disabled.

Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

---

v3:
- Fix 32-bit build, by forcing natural alignment of uint64_t in
  the struct that contains it, using __rte_aligned(8) macro.
- Note: I'm seeing a checkpatch "avoid externs in .c files" warning,
  but it doesn't make sense to me, so perhaps it's a false positive?

v2 (Thanks Honnappa, Morten, Bruce & Mattias for discussion):
- Improved handling of stat stores to ensure they're atomic by
  using __atomic_store_n() with regular loads/increments.
- Added BUILD_BUG_ON alignment checks for the uint64_t stats
  variables, tested with __rte_packed to ensure build breaks.
---
 lib/eal/common/rte_service.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d2b7275ac0..94cb056196 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -50,10 +50,17 @@ struct rte_service_spec_impl {
 	 * on currently.
 	 */
 	uint32_t num_mapped_cores;
-	uint64_t calls;
-	uint64_t cycles_spent;
+
+	/* 32-bit builds won't naturally align a uint64_t, so force alignment,
+	 * allowing regular reads to be atomic.
+	 */
+	uint64_t calls __rte_aligned(8);
+	uint64_t cycles_spent __rte_aligned(8);
 } __rte_cache_aligned;
 
+/* Mask used to ensure uint64_t 8 byte vars are naturally aligned. */
+#define RTE_SERVICE_STAT_ALIGN_MASK (8 - 1)
+
 /* the internal values of a service core */
 struct core_state {
 	/* map of services IDs are run on this core */
@@ -359,13 +366,29 @@ service_runner_do_callback(struct rte_service_spec_impl *s,
 {
 	void *userdata = s->spec.callback_userdata;
 
+	/* Ensure the atomically stored variables are naturally aligned,
+	 * as required for regular loads to be atomic.
+	 */
+	RTE_BUILD_BUG_ON((offsetof(struct rte_service_spec_impl, calls)
+		& RTE_SERVICE_STAT_ALIGN_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_service_spec_impl, cycles_spent)
+		& RTE_SERVICE_STAT_ALIGN_MASK) != 0);
+
 	if (service_stats_enabled(s)) {
 		uint64_t start = rte_rdtsc();
 		s->spec.callback(userdata);
 		uint64_t end = rte_rdtsc();
-		s->cycles_spent += end - start;
+		uint64_t cycles = end - start;
 		cs->calls_per_service[service_idx]++;
-		s->calls++;
+		if (service_mt_safe(s)) {
+			__atomic_fetch_add(&s->cycles_spent, cycles, __ATOMIC_RELAXED);
+			__atomic_fetch_add(&s->calls, 1, __ATOMIC_RELAXED);
+		} else {
+			uint64_t cycles_new = s->cycles_spent + cycles;
+			uint64_t calls_new = s->calls + 1;
+			__atomic_store_n(&s->cycles_spent, cycles_new, __ATOMIC_RELAXED);
+			__atomic_store_n(&s->calls, calls_new, __ATOMIC_RELAXED);
+		}
 	} else
 		s->spec.callback(userdata);
 }
-- 
2.32.0



* Re: [PATCH v3 1/2] test/service: add perf measurements for with stats mode
  2022-07-11 13:18 ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
  2022-07-11 13:18   ` [PATCH v3 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
@ 2022-09-02 17:17   ` Mattias Rönnblom
  1 sibling, 0 replies; 6+ messages in thread
From: Mattias Rönnblom @ 2022-09-02 17:17 UTC (permalink / raw)
  To: Harry van Haaren, dev
  Cc: Mattias Rönnblom, Honnappa Nagarahalli, Morten Brørup

On 2022-07-11 15:18, Harry van Haaren wrote:
> This commit improves the performance reporting of the service
> cores polling loop to show both with and without statistics
> collection modes. Collecting cycle statistics is costly, due
> to calls to rte_rdtsc() per service iteration.

That is true for a service deployed on only a single core. For 
multi-core services, non-rdtsc-related overhead dominates. For example, 
if the service is deployed on 11 cores, the extra statistics-related 
overhead is ~1000 cc/service call on x86_64. 2x rdtsc shouldn't be more 
than ~50 cc.

> 
> Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Suggested-by: Morten Brørup <mb@smartsharesystems.com>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> 
> ---
> 
> This is split out as a separate patch from the fix to allow
> measuring the before/after of the service stats atomic fixup.
> ---
>   app/test/test_service_cores.c | 36 ++++++++++++++++++++++++-----------
>   1 file changed, 25 insertions(+), 11 deletions(-)
> 
> diff --git a/app/test/test_service_cores.c b/app/test/test_service_cores.c
> index ced6ed0081..7415b6b686 100644
> --- a/app/test/test_service_cores.c
> +++ b/app/test/test_service_cores.c
> @@ -777,6 +777,22 @@ service_run_on_app_core_func(void *arg)
>   	return rte_service_run_iter_on_app_lcore(*delay_service_id, 1);
>   }
>   
> +static float
> +service_app_lcore_perf_measure(uint32_t id)
> +{
> +	/* Performance test: call in a loop, and measure tsc() */
> +	const uint32_t perf_iters = (1 << 12);
> +	uint64_t start = rte_rdtsc();
> +	uint32_t i;
> +	for (i = 0; i < perf_iters; i++) {
> +		int err = service_run_on_app_core_func(&id);

In a real-world scenario, the latency of this function isn't 
representative for the overall service core overhead.

For example, consider a scenario where an lcore has a single service 
mapped to it. rte_service.c will call service_run() 64 times, but only 
one will be a "hit" and the service being run. One iteration in the 
service loop costs ~600 cc, on a machine where this performance 
benchmark reports 128 cc. (Both with statistics disabled.)

For low-latency services, this is a significant overhead.

> +		TEST_ASSERT_EQUAL(0, err, "perf test: returned run failure");
> +	}
> +	uint64_t end = rte_rdtsc();
> +
> +	return (end - start)/(float)perf_iters;
> +}
> +
>   static int
>   service_app_lcore_poll_impl(const int mt_safe)
>   {
> @@ -828,17 +844,15 @@ service_app_lcore_poll_impl(const int mt_safe)
>   				"MT Unsafe: App core1 didn't return -EBUSY");
>   	}
>   
> -	/* Performance test: call in a loop, and measure tsc() */
> -	const uint32_t perf_iters = (1 << 12);
> -	uint64_t start = rte_rdtsc();
> -	uint32_t i;
> -	for (i = 0; i < perf_iters; i++) {
> -		int err = service_run_on_app_core_func(&id);
> -		TEST_ASSERT_EQUAL(0, err, "perf test: returned run failure");
> -	}
> -	uint64_t end = rte_rdtsc();
> -	printf("perf test for %s: %0.1f cycles per call\n", mt_safe ?
> -		"MT Safe" : "MT Unsafe", (end - start)/(float)perf_iters);
> +	/* Measure performance of no-stats and with-stats. */
> +	float cyc_no_stats = service_app_lcore_perf_measure(id);
> +
> +	TEST_ASSERT_EQUAL(0, rte_service_set_stats_enable(id, 1),
> +				"failed to enable stats for service.");
> +	float cyc_with_stats = service_app_lcore_perf_measure(id);
> +
> +	printf("perf test for %s, no stats: %0.1f, with stats %0.1f cycles/call\n",
> +		mt_safe ? "MT Safe" : "MT Unsafe", cyc_no_stats, cyc_with_stats);
>   
>   	unregister_all();
>   	return TEST_SUCCESS;


* Re: [PATCH v3 2/2] service: fix potential stats race-condition on MT services
  2022-07-11 13:18   ` [PATCH v3 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
@ 2022-10-05 13:06     ` David Marchand
  0 siblings, 0 replies; 6+ messages in thread
From: David Marchand @ 2022-10-05 13:06 UTC (permalink / raw)
  To: Harry van Haaren
  Cc: dev, Mattias Rönnblom, Honnappa Nagarahalli,
	Morten Brørup, Bruce Richardson

On Mon, Jul 11, 2022 at 3:18 PM Harry van Haaren
<harry.van.haaren@intel.com> wrote:
>
> This commit fixes a potential racy add that could occur if
> multiple service-lcores were executing the same MT-safe service
> at the same time, with service statistics collection enabled.
>
> Because multiple threads can run and execute the service, the
> stats values can have multiple writer threads, resulting in the
> requirement of using atomic addition for correctness.
>
> Note that when a MT unsafe service is executed, a spinlock is
> held, so the stats increments are protected. This fact is used
> to avoid executing atomic add instructions when not required.
> Regular reads and increments are used, and only the store is
> specified as atomic, reducing perf impact on e.g. x86 arch.
>
> This patch causes a 1.25x increase in cycle-cost for polling a
> MT safe service when statistics are enabled. No change was seen
> for MT unsafe services, or when statistics are disabled.

Fixes: 21698354c832 ("service: introduce service cores concept")

I did not mark for backport since the commitlog indicates a performance impact.
You can still ask for backport by pinging LTS maintainers.

>
> Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Suggested-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Suggested-by: Morten Brørup <mb@smartsharesystems.com>
> Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>


Series applied, thanks.

-- 
David Marchand



Thread overview: 6+ messages
2022-07-11 10:57 [PATCH v2 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
2022-07-11 10:57 ` [PATCH v2 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
2022-07-11 13:18 ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Harry van Haaren
2022-07-11 13:18   ` [PATCH v3 2/2] service: fix potential stats race-condition on MT services Harry van Haaren
2022-10-05 13:06     ` David Marchand
2022-09-02 17:17   ` [PATCH v3 1/2] test/service: add perf measurements for with stats mode Mattias Rönnblom
