* [dpdk-dev] [PATCH] app/test: shorten execution time
@ 2016-05-04 17:32 Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-04 17:32 UTC (permalink / raw)
To: dev
The autotests are too long to be run often.
This patch reduces the time needed by some tests in fast_test.
The others will be analyzed once they can run in a VM with a
reasonable amount of memory.
The current status in a small VM is below:
> make fast_test
/root/dpdk/build/app/test -c f -n 4
Test name Test result Test Total
================================================================================
Start group_1: Success [00m 00s]
Timer autotest: Success [00m 02s]
Debug autotest: Success [00m 00s]
Errno autotest: Success [00m 00s]
Meter autotest: Success [00m 00s]
Common autotest: Success [00m 01s]
Dump log history: Success [00m 00s]
Dump rings: Success [00m 00s]
Dump mempools: Success [00m 00s] [00m 05s]
Start group_2: Success [00m 00s]
Memory autotest: Success [00m 00s]
Read/write lock autotest: Success [00m 00s]
Logs autotest: Success [00m 00s]
CPU flags autotest: Success [00m 00s]
Version autotest: Success [00m 00s]
EAL filesystem autotest: Success [00m 00s]
EAL flags autotest: Success [00m 05s]
Hash autotest: Success [00m 00s] [00m 11s]
Start group_3: Fail [No prompt] [00m 00s]
LPM autotest: Fail [No prompt] [00m 00s]
IVSHMEM autotest: Fail [No prompt] [00m 00s]
Memcpy autotest: Fail [No prompt] [00m 00s]
Memzone autotest: Fail [No prompt] [00m 00s]
String autotest: Fail [No prompt] [00m 00s]
Alarm autotest: Fail [No prompt] [00m 00s] [00m 11s]
Start group_4: Success [00m 00s]
PCI autotest: Success [00m 00s]
Malloc autotest: Success [00m 00s]
Multi-process autotest: Success [00m 00s]
Mbuf autotest: Success [00m 02s]
Per-lcore autotest: Success [00m 00s]
Ring autotest: Success [00m 00s] [00m 16s]
Start group_5: Success [00m 00s]
Spinlock autotest: Success [00m 00s]
Byte order autotest: Success [00m 00s]
TAILQ autotest: Success [00m 00s]
Command-line autotest: Success [00m 00s]
Interrupts autotest: Success [00m 00s] [00m 18s]
Start group_6: Fail [No prompt] [00m 00s]
Function reentrancy autotest: Fail [No prompt] [00m 00s]
Mempool autotest: Fail [No prompt] [00m 00s]
Atomics autotest: Fail [No prompt] [00m 00s]
Prefetch autotest: Fail [No prompt] [00m 00s]
Red autotest: Fail [No prompt] [00m 00s] [00m 18s]
Start group_7: Success [00m 00s]
PMD ring autotest: Success [00m 00s]
Access list control autotest: Success [00m 01s]
Sched autotest: Success [00m 00s] [00m 20s]
Start kni: Fail [No prompt] [00m 00s]
KNI autotest: Fail [No prompt] [00m 00s] [00m 20s]
Start mempool_perf: Success [00m 00s]
Cycles autotest: Success [00m 01s] [00m 22s]
Start power: Fail [No prompt] [00m 00s]
Power autotest: Fail [No prompt] [00m 00s] [00m 22s]
Start power_acpi_cpufreq: Fail [No prompt] [00m 00s]
Power ACPI cpufreq autotest: Fail [No prompt] [00m 00s] [00m 22s]
Start power_kvm_vm: Fail [No prompt] [00m 00s]
Power KVM VM autotest: Fail [No prompt] [00m 00s] [00m 23s]
Start timer_perf: Fail [No prompt] [00m 00s]
Timer performance autotest: Fail [No prompt] [00m 00s] [00m 23s]
================================================================================
Total run time: 00m 23s
Number of failed tests: 16
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
app/test/autotest_test_funcs.py | 14 +++++++-------
app/test/test_hash.c | 8 ++++----
app/test/test_interrupts.c | 4 ++--
app/test/test_mbuf.c | 2 +-
app/test/test_per_lcore.c | 4 ++--
app/test/test_ring.c | 7 +++----
app/test/test_spinlock.c | 6 +++---
app/test/test_timer.c | 20 ++++++++++----------
8 files changed, 32 insertions(+), 33 deletions(-)
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index f04909d..5222f6e 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -83,7 +83,7 @@ def spinlock_autotest(child, test_name):
"Test Failed",
"Hello from core ([0-9]*) !",
"Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 20)
+ pexpect.TIMEOUT], timeout = 5)
# ok
if index == 0:
break
@@ -177,9 +177,9 @@ def timer_autotest(child, test_name):
i = 0
child.sendline(test_name)
- index = child.expect(["Start timer stress tests \(20 seconds\)",
+ index = child.expect(["Start timer stress tests",
"Test Failed",
- pexpect.TIMEOUT], timeout = 10)
+ pexpect.TIMEOUT], timeout = 5)
if index == 1:
return -1, "Fail"
@@ -188,16 +188,16 @@ def timer_autotest(child, test_name):
index = child.expect(["Start timer stress tests 2",
"Test Failed",
- pexpect.TIMEOUT], timeout = 40)
+ pexpect.TIMEOUT], timeout = 5)
if index == 1:
return -1, "Fail"
elif index == 2:
return -1, "Fail [Timeout]"
- index = child.expect(["Start timer basic tests \(20 seconds\)",
+ index = child.expect(["Start timer basic tests",
"Test Failed",
- pexpect.TIMEOUT], timeout = 20)
+ pexpect.TIMEOUT], timeout = 5)
if index == 1:
return -1, "Fail"
@@ -277,7 +277,7 @@ def timer_autotest(child, test_name):
def ring_autotest(child, test_name):
child.sendline(test_name)
index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 15)
+ pexpect.TIMEOUT], timeout = 2)
if index == 1:
return -1, "Fail"
elif index == 2:
diff --git a/app/test/test_hash.c b/app/test/test_hash.c
index 61fc0a0..7e41725 100644
--- a/app/test/test_hash.c
+++ b/app/test/test_hash.c
@@ -176,7 +176,7 @@ static struct rte_hash_parameters ut_params = {
.socket_id = 0,
};
-#define CRC32_ITERATIONS (1U << 20)
+#define CRC32_ITERATIONS (1U << 10)
#define CRC32_DWORDS (1U << 6)
/*
* Test if all CRC32 implementations yield the same hash value
@@ -1081,7 +1081,7 @@ test_hash_creation_with_good_parameters(void)
return 0;
}
-#define ITERATIONS 50
+#define ITERATIONS 3
/*
* Test to see the average table utilization (entries added/max entries)
* before hitting a random entry that cannot be added
@@ -1098,7 +1098,7 @@ static int test_average_table_utilization(void)
"\n before adding elements begins to fail\n");
printf("Measuring performance, please wait");
fflush(stdout);
- ut_params.entries = 1 << 20;
+ ut_params.entries = 1 << 16;
ut_params.name = "test_average_utilization";
ut_params.hash_func = rte_jhash;
handle = rte_hash_create(&ut_params);
@@ -1138,7 +1138,7 @@ static int test_average_table_utilization(void)
return 0;
}
-#define NUM_ENTRIES 1024
+#define NUM_ENTRIES 256
static int test_hash_iteration(void)
{
struct rte_hash *handle;
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 6e3dec3..df6d261 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -41,7 +41,7 @@
#include "test.h"
-#define TEST_INTERRUPT_CHECK_INTERVAL 1000 /* ms */
+#define TEST_INTERRUPT_CHECK_INTERVAL 100 /* ms */
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
@@ -372,7 +372,7 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
if (test_interrupt_trigger_interrupt() < 0)
return -1;
- /* check flag in 3 seconds */
+ /* check flag */
for (count = 0; flag == 0 && count < 3; count++)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 98ff93a..59f9979 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -748,7 +748,7 @@ test_refcnt_iter(unsigned lcore, unsigned iter)
__func__, lcore, iter, tref);
return;
}
- rte_delay_ms(1000);
+ rte_delay_ms(100);
}
rte_panic("(lcore=%u, iter=%u): after %us only "
diff --git a/app/test/test_per_lcore.c b/app/test/test_per_lcore.c
index b16449a..f452cdb 100644
--- a/app/test/test_per_lcore.c
+++ b/app/test/test_per_lcore.c
@@ -92,8 +92,8 @@ display_vars(__attribute__((unused)) void *arg)
static int
test_per_lcore_delay(__attribute__((unused)) void *arg)
{
- rte_delay_ms(5000);
- printf("wait 5000ms on lcore %u\n", rte_lcore_id());
+ rte_delay_ms(100);
+ printf("wait 100ms on lcore %u\n", rte_lcore_id());
return 0;
}
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 0d7523e..d18812e 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -101,7 +101,6 @@
#define RING_SIZE 4096
#define MAX_BULK 32
#define N 65536
-#define TIME_S 5
static rte_atomic32_t synchro;
@@ -130,7 +129,7 @@ check_live_watermark_change(__attribute__((unused)) void *dummy)
/* init the object table */
memset(obj_table, 0, sizeof(obj_table));
- end_time = rte_get_timer_cycles() + (hz * 2);
+ end_time = rte_get_timer_cycles() + (hz / 4);
/* check that bulk and watermark are 4 and 32 (respectively) */
while (diff >= 0) {
@@ -194,9 +193,9 @@ test_live_watermark_change(void)
* watermark and quota */
rte_eal_remote_launch(check_live_watermark_change, NULL, lcore_id2);
- rte_delay_ms(1000);
+ rte_delay_ms(100);
rte_ring_set_water_mark(r, 32);
- rte_delay_ms(1000);
+ rte_delay_ms(100);
if (rte_eal_wait_lcore(lcore_id2) < 0)
return -1;
diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 16ced7f..180d6de 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -129,7 +129,7 @@ test_spinlock_recursive_per_core(__attribute__((unused)) void *arg)
static rte_spinlock_t lk = RTE_SPINLOCK_INITIALIZER;
static uint64_t lock_count[RTE_MAX_LCORE] = {0};
-#define TIME_S 5
+#define TIME_MS 100
static int
load_loop_fn(void *func_param)
@@ -145,7 +145,7 @@ load_loop_fn(void *func_param)
while (rte_atomic32_read(&synchro) == 0);
begin = rte_get_timer_cycles();
- while (time_diff / hz < TIME_S) {
+ while (time_diff < hz * TIME_MS / 1000) {
if (use_lock)
rte_spinlock_lock(&lk);
lcount++;
@@ -258,7 +258,7 @@ test_spinlock(void)
RTE_LCORE_FOREACH_SLAVE(i) {
rte_spinlock_unlock(&sl_tab[i]);
- rte_delay_ms(100);
+ rte_delay_ms(10);
}
rte_eal_mp_wait_lcore();
diff --git a/app/test/test_timer.c b/app/test/test_timer.c
index 944e2ad..bc07925 100644
--- a/app/test/test_timer.c
+++ b/app/test/test_timer.c
@@ -137,7 +137,7 @@
#include <rte_random.h>
#include <rte_malloc.h>
-#define TEST_DURATION_S 20 /* in seconds */
+#define TEST_DURATION_S 1 /* in seconds */
#define NB_TIMER 4
#define RTE_LOGTYPE_TESTTIMER RTE_LOGTYPE_USER3
@@ -305,7 +305,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
{
static struct rte_timer *timers;
int i, ret;
- uint64_t delay = rte_get_timer_hz() / 4;
+ uint64_t delay = rte_get_timer_hz() / 20;
unsigned lcore_id = rte_lcore_id();
unsigned master = rte_get_master_lcore();
int32_t my_collisions = 0;
@@ -346,7 +346,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
rte_atomic32_add(&collisions, my_collisions);
/* wait long enough for timers to expire */
- rte_delay_ms(500);
+ rte_delay_ms(100);
/* all cores rendezvous */
if (lcore_id == master) {
@@ -396,7 +396,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
}
/* wait long enough for timers to expire */
- rte_delay_ms(500);
+ rte_delay_ms(100);
/* now check that we get the right number of callbacks */
if (lcore_id == master) {
@@ -495,13 +495,13 @@ timer_basic_main_loop(__attribute__((unused)) void *arg)
/* launch all timers on core 0 */
if (lcore_id == rte_get_master_lcore()) {
- mytimer_reset(&mytiminfo[0], hz, SINGLE, lcore_id,
+ mytimer_reset(&mytiminfo[0], hz/4, SINGLE, lcore_id,
timer_basic_cb);
- mytimer_reset(&mytiminfo[1], hz*2, SINGLE, lcore_id,
+ mytimer_reset(&mytiminfo[1], hz/2, SINGLE, lcore_id,
timer_basic_cb);
- mytimer_reset(&mytiminfo[2], hz, PERIODICAL, lcore_id,
+ mytimer_reset(&mytiminfo[2], hz/4, PERIODICAL, lcore_id,
timer_basic_cb);
- mytimer_reset(&mytiminfo[3], hz, PERIODICAL,
+ mytimer_reset(&mytiminfo[3], hz/4, PERIODICAL,
rte_get_next_lcore(lcore_id, 0, 1),
timer_basic_cb);
}
@@ -591,7 +591,7 @@ test_timer(void)
end_time = cur_time + (hz * TEST_DURATION_S);
/* start other cores */
- printf("Start timer stress tests (%d seconds)\n", TEST_DURATION_S);
+ printf("Start timer stress tests\n");
rte_eal_mp_remote_launch(timer_stress_main_loop, NULL, CALL_MASTER);
rte_eal_mp_wait_lcore();
@@ -612,7 +612,7 @@ test_timer(void)
end_time = cur_time + (hz * TEST_DURATION_S);
/* start other cores */
- printf("\nStart timer basic tests (%d seconds)\n", TEST_DURATION_S);
+ printf("\nStart timer basic tests\n");
rte_eal_mp_remote_launch(timer_basic_main_loop, NULL, CALL_MASTER);
rte_eal_mp_wait_lcore();
--
2.7.0
* [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints
2016-05-04 17:32 [dpdk-dev] [PATCH] app/test: shorten execution time Thomas Monjalon
@ 2016-05-11 14:26 ` Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 1/4] app/test: shorten execution time Thomas Monjalon
` (4 more replies)
0 siblings, 5 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-11 14:26 UTC (permalink / raw)
To: dev
In order to make autotests easy to run often,
the time and memory constraints are reduced.
These patches depend on the LPM autotest split.
The current status in a small VM is below:
> make fast_test
/root/dpdk/build/app/test -c f -n 4
Test name Test result Test Total
====================================================================
Start group_1: Success [00m 01s]
Cycles autotest: Success [00m 01s]
Timer autotest: Success [00m 03s]
Debug autotest: Success [00m 00s]
Errno autotest: Success [00m 00s]
Meter autotest: Success [00m 00s]
Common autotest: Success [00m 00s]
Dump log history: Success [00m 00s]
Dump rings: Success [00m 00s]
Dump mempools: Success [00m 00s] [00m 07s]
Start group_2: Success [00m 00s]
Memory autotest: Success [00m 00s]
Read/write lock autotest: Success [00m 00s]
Logs autotest: Success [00m 00s]
CPU flags autotest: Success [00m 00s]
Version autotest: Success [00m 00s]
EAL filesystem autotest: Success [00m 00s]
EAL flags autotest: Success [00m 05s]
Hash autotest: Success [00m 00s] [00m 14s]
Start group_3: Success [00m 00s]
LPM autotest: Success [00m 01s]
LPM6 autotest: Success [00m 04s]
IVSHMEM autotest: Fail [Not found] [00m 00s]
Memcpy autotest: Success [00m 08s]
Memzone autotest: Success [00m 00s]
String autotest: Success [00m 00s]
Alarm autotest: Success [00m 00s] [00m 29s]
Start group_4: Success [00m 00s]
PCI autotest: Success [00m 00s]
Malloc autotest: Success [00m 00s]
Multi-process autotest: Success [00m 00s]
Mbuf autotest: Success [00m 01s]
Per-lcore autotest: Success [00m 00s]
Ring autotest: Success [00m 00s] [00m 32s]
Start group_5: Success [00m 00s]
Spinlock autotest: Success [00m 00s]
Byte order autotest: Success [00m 00s]
TAILQ autotest: Success [00m 00s]
Command-line autotest: Success [00m 00s]
Interrupts autotest: Success [00m 00s] [00m 34s]
Start group_6: Success [00m 00s]
Function reentrancy autotest: Fail [00m 00s]
Mempool autotest: Success [00m 00s]
Atomics autotest: Success [00m 00s]
Prefetch autotest: Success [00m 00s]
Red autotest: Success [01m 36s] [02m 13s]
Start group_7: Success [00m 00s]
PMD ring autotest: Success [00m 00s]
Access list control autotest: Success [00m 01s]
Sched autotest: Success [00m 00s] [02m 15s]
Start kni: Fail [No prompt] [00m 00s]
KNI autotest: Fail [No prompt] [00m 00s] [02m 15s]
Start power: Success [00m 00s]
Power autotest: Success [00m 00s] [02m 16s]
Start power_acpi_cpufreq: Success [00m 00s]
Power ACPI cpufreq autotest: Fail [00m 00s] [02m 16s]
Start power_kvm_vm: Success [00m 00s]
Power KVM VM autotest: Fail [00m 00s] [02m 17s]
====================================================================
Total run time: 02m 17s
Number of failed tests: 5
The RED autotest needs some work.
Thomas Monjalon (4):
app/test: shorten execution time
app/test: reduce memory needs
app/test: remove unused constants
app/test: move cycles autotest to first group
app/test/autotest_data.py | 26 +++++++++++-----------
app/test/autotest_test_funcs.py | 14 ++++++------
app/test/test_alarm.c | 48 ++++++++++++++++++++---------------------
app/test/test_hash.c | 8 +++----
app/test/test_interrupts.c | 4 ++--
app/test/test_lpm6.c | 37 +++++++++++++------------------
app/test/test_mbuf.c | 2 +-
app/test/test_memcpy.c | 15 -------------
app/test/test_mempool.c | 8 +++----
app/test/test_per_lcore.c | 4 ++--
app/test/test_ring.c | 8 +++----
app/test/test_spinlock.c | 6 +++---
app/test/test_timer.c | 20 ++++++++---------
13 files changed, 87 insertions(+), 113 deletions(-)
--
2.7.0
* [dpdk-dev] [PATCH v2 1/4] app/test: shorten execution time
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
@ 2016-05-11 14:26 ` Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 2/4] app/test: reduce memory needs Thomas Monjalon
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-11 14:26 UTC (permalink / raw)
To: dev
The autotests are too long to be run often.
This patch reduces the time needed by some tests in fast_test.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
app/test/autotest_test_funcs.py | 14 ++++++------
app/test/test_alarm.c | 48 ++++++++++++++++++++---------------------
app/test/test_hash.c | 8 +++----
app/test/test_interrupts.c | 4 ++--
app/test/test_lpm6.c | 35 +++++++++++++-----------------
app/test/test_mbuf.c | 2 +-
app/test/test_mempool.c | 6 +++---
app/test/test_per_lcore.c | 4 ++--
app/test/test_ring.c | 6 +++---
app/test/test_spinlock.c | 6 +++---
app/test/test_timer.c | 20 ++++++++---------
11 files changed, 74 insertions(+), 79 deletions(-)
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index f04909d..5222f6e 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -83,7 +83,7 @@ def spinlock_autotest(child, test_name):
"Test Failed",
"Hello from core ([0-9]*) !",
"Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 20)
+ pexpect.TIMEOUT], timeout = 5)
# ok
if index == 0:
break
@@ -177,9 +177,9 @@ def timer_autotest(child, test_name):
i = 0
child.sendline(test_name)
- index = child.expect(["Start timer stress tests \(20 seconds\)",
+ index = child.expect(["Start timer stress tests",
"Test Failed",
- pexpect.TIMEOUT], timeout = 10)
+ pexpect.TIMEOUT], timeout = 5)
if index == 1:
return -1, "Fail"
@@ -188,16 +188,16 @@ def timer_autotest(child, test_name):
index = child.expect(["Start timer stress tests 2",
"Test Failed",
- pexpect.TIMEOUT], timeout = 40)
+ pexpect.TIMEOUT], timeout = 5)
if index == 1:
return -1, "Fail"
elif index == 2:
return -1, "Fail [Timeout]"
- index = child.expect(["Start timer basic tests \(20 seconds\)",
+ index = child.expect(["Start timer basic tests",
"Test Failed",
- pexpect.TIMEOUT], timeout = 20)
+ pexpect.TIMEOUT], timeout = 5)
if index == 1:
return -1, "Fail"
@@ -277,7 +277,7 @@ def timer_autotest(child, test_name):
def ring_autotest(child, test_name):
child.sendline(test_name)
index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 15)
+ pexpect.TIMEOUT], timeout = 2)
if index == 1:
return -1, "Fail"
elif index == 2:
diff --git a/app/test/test_alarm.c b/app/test/test_alarm.c
index 5d6f4a2..d83591c 100644
--- a/app/test/test_alarm.c
+++ b/app/test/test_alarm.c
@@ -45,8 +45,8 @@
#define US_PER_MS 1000
-#define RTE_TEST_ALARM_TIMEOUT 3000 /* ms */
-#define RTE_TEST_CHECK_PERIOD 1000 /* ms */
+#define RTE_TEST_ALARM_TIMEOUT 10 /* ms */
+#define RTE_TEST_CHECK_PERIOD 3 /* ms */
static volatile int flag;
@@ -100,17 +100,17 @@ test_multi_alarms(void)
printf("Expect 6 callbacks in order...\n");
/* add two alarms in order */
- rte_eal_alarm_set(1000 * US_PER_MS, test_multi_cb, (void *)1);
- rte_eal_alarm_set(2000 * US_PER_MS, test_multi_cb, (void *)2);
+ rte_eal_alarm_set(10 * US_PER_MS, test_multi_cb, (void *)1);
+ rte_eal_alarm_set(20 * US_PER_MS, test_multi_cb, (void *)2);
/* now add in reverse order */
- rte_eal_alarm_set(6000 * US_PER_MS, test_multi_cb, (void *)6);
- rte_eal_alarm_set(5000 * US_PER_MS, test_multi_cb, (void *)5);
- rte_eal_alarm_set(4000 * US_PER_MS, test_multi_cb, (void *)4);
- rte_eal_alarm_set(3000 * US_PER_MS, test_multi_cb, (void *)3);
+ rte_eal_alarm_set(60 * US_PER_MS, test_multi_cb, (void *)6);
+ rte_eal_alarm_set(50 * US_PER_MS, test_multi_cb, (void *)5);
+ rte_eal_alarm_set(40 * US_PER_MS, test_multi_cb, (void *)4);
+ rte_eal_alarm_set(30 * US_PER_MS, test_multi_cb, (void *)3);
/* wait for expiry */
- rte_delay_ms(6500);
+ rte_delay_ms(65);
if (cb_count.cnt != 6) {
printf("Missing callbacks\n");
/* remove any callbacks that might remain */
@@ -121,12 +121,12 @@ test_multi_alarms(void)
cb_count.cnt = 0;
printf("Expect only callbacks with args 1 and 3...\n");
/* Add 3 flags, then delete one */
- rte_eal_alarm_set(3000 * US_PER_MS, test_multi_cb, (void *)3);
- rte_eal_alarm_set(2000 * US_PER_MS, test_multi_cb, (void *)2);
- rte_eal_alarm_set(1000 * US_PER_MS, test_multi_cb, (void *)1);
+ rte_eal_alarm_set(30 * US_PER_MS, test_multi_cb, (void *)3);
+ rte_eal_alarm_set(20 * US_PER_MS, test_multi_cb, (void *)2);
+ rte_eal_alarm_set(10 * US_PER_MS, test_multi_cb, (void *)1);
rm_count = rte_eal_alarm_cancel(test_multi_cb, (void *)2);
- rte_delay_ms(3500);
+ rte_delay_ms(35);
if (cb_count.cnt != 2 || rm_count != 1) {
printf("Error: invalid flags count or alarm removal failure"
" - flags value = %d, expected = %d\n",
@@ -138,9 +138,9 @@ test_multi_alarms(void)
printf("Testing adding and then removing multiple alarms\n");
/* finally test that no callbacks are called if we delete them all*/
- rte_eal_alarm_set(1000 * US_PER_MS, test_multi_cb, (void *)1);
- rte_eal_alarm_set(1000 * US_PER_MS, test_multi_cb, (void *)2);
- rte_eal_alarm_set(1000 * US_PER_MS, test_multi_cb, (void *)3);
+ rte_eal_alarm_set(10 * US_PER_MS, test_multi_cb, (void *)1);
+ rte_eal_alarm_set(10 * US_PER_MS, test_multi_cb, (void *)2);
+ rte_eal_alarm_set(10 * US_PER_MS, test_multi_cb, (void *)3);
rm_count = rte_eal_alarm_cancel(test_alarm_callback, (void *)-1);
if (rm_count != 0) {
printf("Error removing non-existant alarm succeeded\n");
@@ -157,19 +157,19 @@ test_multi_alarms(void)
* Also test that we can cancel head-of-line callbacks ok.*/
flag = 0;
recursive_error = 0;
- rte_eal_alarm_set(1000 * US_PER_MS, test_remove_in_callback, (void *)1);
- rte_eal_alarm_set(2000 * US_PER_MS, test_remove_in_callback, (void *)2);
+ rte_eal_alarm_set(10 * US_PER_MS, test_remove_in_callback, (void *)1);
+ rte_eal_alarm_set(20 * US_PER_MS, test_remove_in_callback, (void *)2);
rm_count = rte_eal_alarm_cancel(test_remove_in_callback, (void *)1);
if (rm_count != 1) {
printf("Error cancelling head-of-list callback\n");
return -1;
}
- rte_delay_ms(1500);
+ rte_delay_ms(15);
if (flag != 0) {
printf("Error, cancelling head-of-list leads to premature callback\n");
return -1;
}
- rte_delay_ms(1000);
+ rte_delay_ms(10);
if (flag != 2) {
printf("Error - expected callback not called\n");
rte_eal_alarm_cancel(test_remove_in_callback, (void *)-1);
@@ -181,10 +181,10 @@ test_multi_alarms(void)
/* Check if it can cancel all for the same callback */
printf("Testing canceling all for the same callback\n");
flag_2 = 0;
- rte_eal_alarm_set(1000 * US_PER_MS, test_remove_in_callback, (void *)1);
- rte_eal_alarm_set(2000 * US_PER_MS, test_remove_in_callback_2, (void *)2);
- rte_eal_alarm_set(3000 * US_PER_MS, test_remove_in_callback_2, (void *)3);
- rte_eal_alarm_set(4000 * US_PER_MS, test_remove_in_callback, (void *)4);
+ rte_eal_alarm_set(10 * US_PER_MS, test_remove_in_callback, (void *)1);
+ rte_eal_alarm_set(20 * US_PER_MS, test_remove_in_callback_2, (void *)2);
+ rte_eal_alarm_set(30 * US_PER_MS, test_remove_in_callback_2, (void *)3);
+ rte_eal_alarm_set(40 * US_PER_MS, test_remove_in_callback, (void *)4);
rm_count = rte_eal_alarm_cancel(test_remove_in_callback_2, (void *)-1);
if (rm_count != 2) {
printf("Error, cannot cancel all for the same callback\n");
diff --git a/app/test/test_hash.c b/app/test/test_hash.c
index 61fc0a0..7e41725 100644
--- a/app/test/test_hash.c
+++ b/app/test/test_hash.c
@@ -176,7 +176,7 @@ static struct rte_hash_parameters ut_params = {
.socket_id = 0,
};
-#define CRC32_ITERATIONS (1U << 20)
+#define CRC32_ITERATIONS (1U << 10)
#define CRC32_DWORDS (1U << 6)
/*
* Test if all CRC32 implementations yield the same hash value
@@ -1081,7 +1081,7 @@ test_hash_creation_with_good_parameters(void)
return 0;
}
-#define ITERATIONS 50
+#define ITERATIONS 3
/*
* Test to see the average table utilization (entries added/max entries)
* before hitting a random entry that cannot be added
@@ -1098,7 +1098,7 @@ static int test_average_table_utilization(void)
"\n before adding elements begins to fail\n");
printf("Measuring performance, please wait");
fflush(stdout);
- ut_params.entries = 1 << 20;
+ ut_params.entries = 1 << 16;
ut_params.name = "test_average_utilization";
ut_params.hash_func = rte_jhash;
handle = rte_hash_create(&ut_params);
@@ -1138,7 +1138,7 @@ static int test_average_table_utilization(void)
return 0;
}
-#define NUM_ENTRIES 1024
+#define NUM_ENTRIES 256
static int test_hash_iteration(void)
{
struct rte_hash *handle;
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 6e3dec3..df6d261 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -41,7 +41,7 @@
#include "test.h"
-#define TEST_INTERRUPT_CHECK_INTERVAL 1000 /* ms */
+#define TEST_INTERRUPT_CHECK_INTERVAL 100 /* ms */
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
@@ -372,7 +372,7 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
if (test_interrupt_trigger_interrupt() < 0)
return -1;
- /* check flag in 3 seconds */
+ /* check flag */
for (count = 0; flag == 0 && count < 3; count++)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
diff --git a/app/test/test_lpm6.c b/app/test/test_lpm6.c
index 9163cd7..9545982 100644
--- a/app/test/test_lpm6.c
+++ b/app/test/test_lpm6.c
@@ -220,7 +220,7 @@ test1(void)
}
/*
- * Create lpm table then delete lpm table 100 times
+ * Create lpm table then delete lpm table 20 times
* Use a slightly different rules size each time
*/
int32_t
@@ -234,7 +234,7 @@ test2(void)
config.flags = 0;
/* rte_lpm6_free: Free NULL */
- for (i = 0; i < 100; i++) {
+ for (i = 0; i < 20; i++) {
config.max_rules = MAX_RULES - i;
lpm = rte_lpm6_create(__func__, SOCKET_ID_ANY, &config);
TEST_LPM_ASSERT(lpm != NULL);
@@ -693,7 +693,7 @@ test13(void)
}
/*
- * Add 2^16 routes with different first 16 bits and depth 25.
+ * Add 2^12 routes with different first 12 bits and depth 25.
* Add one more route with the same depth and check that results in a failure.
* After that delete the last rule and create the one that was attempted to be
* created. This checks tbl8 exhaustion.
@@ -706,10 +706,10 @@ test14(void)
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
uint8_t depth = 25, next_hop_add = 100;
int32_t status = 0;
- int i, j;
+ int i;
config.max_rules = MAX_RULES;
- config.number_tbl8s = NUMBER_TBL8S;
+ config.number_tbl8s = 256;
config.flags = 0;
lpm = rte_lpm6_create(__func__, SOCKET_ID_ANY, &config);
@@ -717,28 +717,22 @@ test14(void)
for (i = 0; i < 256; i++) {
ip[0] = (uint8_t)i;
- for (j = 0; j < 256; j++) {
- ip[1] = (uint8_t)j;
- status = rte_lpm6_add(lpm, ip, depth, next_hop_add);
- TEST_LPM_ASSERT(status == 0);
- }
+ status = rte_lpm6_add(lpm, ip, depth, next_hop_add);
+ TEST_LPM_ASSERT(status == 0);
}
ip[0] = 255;
- ip[1] = 255;
- ip[2] = 1;
+ ip[1] = 1;
status = rte_lpm6_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == -ENOSPC);
ip[0] = 255;
- ip[1] = 255;
- ip[2] = 0;
+ ip[1] = 0;
status = rte_lpm6_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
ip[0] = 255;
- ip[1] = 255;
- ip[2] = 1;
+ ip[1] = 1;
status = rte_lpm6_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
@@ -847,7 +841,7 @@ test17(void)
TEST_LPM_ASSERT(lpm != NULL);
/* Loop with rte_lpm6_add. */
- for (depth = 1; depth <= 128; depth++) {
+ for (depth = 1; depth <= 16; depth++) {
/* Let the next_hop_add value = depth. Just for change. */
next_hop_add = depth;
@@ -864,7 +858,7 @@ test17(void)
}
/* Loop with rte_lpm6_delete. */
- for (depth = 128; depth >= 1; depth--) {
+ for (depth = 16; depth >= 1; depth--) {
next_hop_add = (uint8_t) (depth - 1);
status = rte_lpm6_delete(lpm, ip2, depth);
@@ -1493,7 +1487,7 @@ test22(void)
/*
* Add an extended rule (i.e. depth greater than 24, lookup (hit), delete,
- * lookup (miss) in a for loop of 1000 times. This will check tbl8 extension
+ * lookup (miss) in a for loop of 30 times. This will check tbl8 extension
* and contraction.
*/
int32_t
@@ -1517,7 +1511,7 @@ test23(void)
depth = 128;
next_hop_add = 100;
- for (i = 0; i < 1000; i++) {
+ for (i = 0; i < 30; i++) {
status = rte_lpm6_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
@@ -1760,6 +1754,7 @@ test_lpm6(void)
int status = -1, global_status = 0;
for (i = 0; i < NUM_LPM6_TESTS; i++) {
+ printf("# test %02d\n", i);
status = tests6[i]();
if (status < 0) {
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 98ff93a..59f9979 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -748,7 +748,7 @@ test_refcnt_iter(unsigned lcore, unsigned iter)
__func__, lcore, iter, tref);
return;
}
- rte_delay_ms(1000);
+ rte_delay_ms(100);
}
rte_panic("(lcore=%u, iter=%u): after %us only "
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index f0f823b..893d5d0 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -74,7 +74,7 @@
#define N 65536
#define TIME_S 5
#define MEMPOOL_ELT_SIZE 2048
-#define MAX_KEEP 128
+#define MAX_KEEP 16
#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
static struct rte_mempool *mp;
@@ -222,7 +222,7 @@ static int test_mempool_single_producer(void)
unsigned int i;
void *obj = NULL;
uint64_t start_cycles, end_cycles;
- uint64_t duration = rte_get_timer_hz() * 8;
+ uint64_t duration = rte_get_timer_hz() / 4;
start_cycles = rte_get_timer_cycles();
while (1) {
@@ -262,7 +262,7 @@ static int test_mempool_single_consumer(void)
unsigned int i;
void * obj;
uint64_t start_cycles, end_cycles;
- uint64_t duration = rte_get_timer_hz() * 5;
+ uint64_t duration = rte_get_timer_hz() / 8;
start_cycles = rte_get_timer_cycles();
while (1) {
diff --git a/app/test/test_per_lcore.c b/app/test/test_per_lcore.c
index b16449a..f452cdb 100644
--- a/app/test/test_per_lcore.c
+++ b/app/test/test_per_lcore.c
@@ -92,8 +92,8 @@ display_vars(__attribute__((unused)) void *arg)
static int
test_per_lcore_delay(__attribute__((unused)) void *arg)
{
- rte_delay_ms(5000);
- printf("wait 5000ms on lcore %u\n", rte_lcore_id());
+ rte_delay_ms(100);
+ printf("wait 100ms on lcore %u\n", rte_lcore_id());
return 0;
}
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 0d7523e..4f8dc8f 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -130,7 +130,7 @@ check_live_watermark_change(__attribute__((unused)) void *dummy)
/* init the object table */
memset(obj_table, 0, sizeof(obj_table));
- end_time = rte_get_timer_cycles() + (hz * 2);
+ end_time = rte_get_timer_cycles() + (hz / 4);
/* check that bulk and watermark are 4 and 32 (respectively) */
while (diff >= 0) {
@@ -194,9 +194,9 @@ test_live_watermark_change(void)
* watermark and quota */
rte_eal_remote_launch(check_live_watermark_change, NULL, lcore_id2);
- rte_delay_ms(1000);
+ rte_delay_ms(100);
rte_ring_set_water_mark(r, 32);
- rte_delay_ms(1000);
+ rte_delay_ms(100);
if (rte_eal_wait_lcore(lcore_id2) < 0)
return -1;
diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 16ced7f..180d6de 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -129,7 +129,7 @@ test_spinlock_recursive_per_core(__attribute__((unused)) void *arg)
static rte_spinlock_t lk = RTE_SPINLOCK_INITIALIZER;
static uint64_t lock_count[RTE_MAX_LCORE] = {0};
-#define TIME_S 5
+#define TIME_MS 100
static int
load_loop_fn(void *func_param)
@@ -145,7 +145,7 @@ load_loop_fn(void *func_param)
while (rte_atomic32_read(&synchro) == 0);
begin = rte_get_timer_cycles();
- while (time_diff / hz < TIME_S) {
+ while (time_diff < hz * TIME_MS / 1000) {
if (use_lock)
rte_spinlock_lock(&lk);
lcount++;
@@ -258,7 +258,7 @@ test_spinlock(void)
RTE_LCORE_FOREACH_SLAVE(i) {
rte_spinlock_unlock(&sl_tab[i]);
- rte_delay_ms(100);
+ rte_delay_ms(10);
}
rte_eal_mp_wait_lcore();
diff --git a/app/test/test_timer.c b/app/test/test_timer.c
index 944e2ad..bc07925 100644
--- a/app/test/test_timer.c
+++ b/app/test/test_timer.c
@@ -137,7 +137,7 @@
#include <rte_random.h>
#include <rte_malloc.h>
-#define TEST_DURATION_S 20 /* in seconds */
+#define TEST_DURATION_S 1 /* in seconds */
#define NB_TIMER 4
#define RTE_LOGTYPE_TESTTIMER RTE_LOGTYPE_USER3
@@ -305,7 +305,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
{
static struct rte_timer *timers;
int i, ret;
- uint64_t delay = rte_get_timer_hz() / 4;
+ uint64_t delay = rte_get_timer_hz() / 20;
unsigned lcore_id = rte_lcore_id();
unsigned master = rte_get_master_lcore();
int32_t my_collisions = 0;
@@ -346,7 +346,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
rte_atomic32_add(&collisions, my_collisions);
/* wait long enough for timers to expire */
- rte_delay_ms(500);
+ rte_delay_ms(100);
/* all cores rendezvous */
if (lcore_id == master) {
@@ -396,7 +396,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
}
/* wait long enough for timers to expire */
- rte_delay_ms(500);
+ rte_delay_ms(100);
/* now check that we get the right number of callbacks */
if (lcore_id == master) {
@@ -495,13 +495,13 @@ timer_basic_main_loop(__attribute__((unused)) void *arg)
/* launch all timers on core 0 */
if (lcore_id == rte_get_master_lcore()) {
- mytimer_reset(&mytiminfo[0], hz, SINGLE, lcore_id,
+ mytimer_reset(&mytiminfo[0], hz/4, SINGLE, lcore_id,
timer_basic_cb);
- mytimer_reset(&mytiminfo[1], hz*2, SINGLE, lcore_id,
+ mytimer_reset(&mytiminfo[1], hz/2, SINGLE, lcore_id,
timer_basic_cb);
- mytimer_reset(&mytiminfo[2], hz, PERIODICAL, lcore_id,
+ mytimer_reset(&mytiminfo[2], hz/4, PERIODICAL, lcore_id,
timer_basic_cb);
- mytimer_reset(&mytiminfo[3], hz, PERIODICAL,
+ mytimer_reset(&mytiminfo[3], hz/4, PERIODICAL,
rte_get_next_lcore(lcore_id, 0, 1),
timer_basic_cb);
}
@@ -591,7 +591,7 @@ test_timer(void)
end_time = cur_time + (hz * TEST_DURATION_S);
/* start other cores */
- printf("Start timer stress tests (%d seconds)\n", TEST_DURATION_S);
+ printf("Start timer stress tests\n");
rte_eal_mp_remote_launch(timer_stress_main_loop, NULL, CALL_MASTER);
rte_eal_mp_wait_lcore();
@@ -612,7 +612,7 @@ test_timer(void)
end_time = cur_time + (hz * TEST_DURATION_S);
/* start other cores */
- printf("\nStart timer basic tests (%d seconds)\n", TEST_DURATION_S);
+ printf("\nStart timer basic tests\n");
rte_eal_mp_remote_launch(timer_basic_main_loop, NULL, CALL_MASTER);
rte_eal_mp_wait_lcore();
--
2.7.0
* [dpdk-dev] [PATCH v2 2/4] app/test: reduce memory needs
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 1/4] app/test: shorten execution time Thomas Monjalon
@ 2016-05-11 14:26 ` Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 3/4] app/test: remove unused constants Thomas Monjalon
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-11 14:26 UTC (permalink / raw)
To: dev
Adjust the memory parameter (--socket-mem) of the autotests so that
make fast_test can run in a constrained environment (e.g. a VM).
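For context, here is a minimal sketch of how a group's "Memory" value could be
turned into the EAL --socket-mem option when the test binary is launched. The
helper name and the handling of per_sockets() values are assumptions made for
illustration only; this is not the actual autotest_runner code.

# Illustration only -- hypothetical helper, not the real autotest_runner code.
# It assumes a "Memory" value is either a single amount in MB ("16") or an
# already comma-separated per-socket list (e.g. what per_sockets() may return).

def eal_socket_mem_arg(memory, num_sockets=1):
    """Build the --socket-mem EAL option from a test group's "Memory" field."""
    memory = str(memory)
    if "," in memory:
        socket_mem = memory                     # already a per-socket list
    else:
        # single value: give everything to socket 0, nothing to the others
        socket_mem = ",".join([memory] + ["0"] * (num_sockets - 1))
    return "--socket-mem=%s" % socket_mem

# A group reduced from 512 MB to 16 MB would then be started roughly as:
#   /root/dpdk/build/app/test -c f -n 4 --socket-mem=16
print(eal_socket_mem_arg("16"))       # --socket-mem=16
print(eal_socket_mem_arg("64", 2))    # --socket-mem=64,0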
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
app/test/autotest_data.py | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 8a92bb4..25cca4e 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -109,7 +109,7 @@ parallel_test_group_list = [
},
{
"Prefix": "group_2",
- "Memory" : "128",
+ "Memory" : "16",
"Tests" :
[
{
@@ -164,7 +164,7 @@ parallel_test_group_list = [
},
{
"Prefix": "group_3",
- "Memory" : per_sockets(1024),
+ "Memory" : per_sockets(390),
"Tests" :
[
{
@@ -293,7 +293,7 @@ parallel_test_group_list = [
},
{
"Prefix": "group_6",
- "Memory" : per_sockets(620),
+ "Memory" : per_sockets(128),
"Tests" :
[
{
@@ -330,7 +330,7 @@ parallel_test_group_list = [
},
{
"Prefix" : "group_7",
- "Memory" : "400",
+ "Memory" : "64",
"Tests" :
[
{
@@ -418,7 +418,7 @@ non_parallel_test_group_list = [
},
{
"Prefix" : "power",
- "Memory" : per_sockets(512),
+ "Memory" : "16",
"Tests" :
[
{
@@ -431,7 +431,7 @@ non_parallel_test_group_list = [
},
{
"Prefix" : "power_acpi_cpufreq",
- "Memory" : per_sockets(512),
+ "Memory" : "16",
"Tests" :
[
{
@@ -444,7 +444,7 @@ non_parallel_test_group_list = [
},
{
"Prefix" : "power_kvm_vm",
- "Memory" : "512",
+ "Memory" : "16",
"Tests" :
[
{
--
2.7.0
* [dpdk-dev] [PATCH v2 3/4] app/test: remove unused constants
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 1/4] app/test: shorten execution time Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 2/4] app/test: reduce memory needs Thomas Monjalon
@ 2016-05-11 14:26 ` Thomas Monjalon
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 4/4] app/test: move cycles autotest to first group Thomas Monjalon
2016-05-24 15:20 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
4 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-11 14:26 UTC (permalink / raw)
To: dev
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
app/test/test_lpm6.c | 2 --
app/test/test_memcpy.c | 15 ---------------
app/test/test_mempool.c | 2 --
app/test/test_ring.c | 2 --
4 files changed, 21 deletions(-)
diff --git a/app/test/test_lpm6.c b/app/test/test_lpm6.c
index 9545982..458a10b 100644
--- a/app/test/test_lpm6.c
+++ b/app/test/test_lpm6.c
@@ -113,8 +113,6 @@ rte_lpm6_test tests6[] = {
};
#define NUM_LPM6_TESTS (sizeof(tests6)/sizeof(tests6[0]))
-#define RTE_LPM6_TBL24_NUM_ENTRIES (1 << 24)
-#define RTE_LPM6_LOOKUP_SUCCESS 0x04000000
#define MAX_DEPTH 128
#define MAX_RULES 1000000
#define NUMBER_TBL8S (1 << 16)
diff --git a/app/test/test_memcpy.c b/app/test/test_memcpy.c
index b2bb4e0..8195e20 100644
--- a/app/test/test_memcpy.c
+++ b/app/test/test_memcpy.c
@@ -37,10 +37,7 @@
#include <stdlib.h>
#include <rte_common.h>
-#include <rte_cycles.h>
#include <rte_random.h>
-#include <rte_malloc.h>
-
#include <rte_memcpy.h>
#include "test.h"
@@ -65,18 +62,6 @@ static size_t buf_sizes[TEST_VALUE_RANGE];
#define SMALL_BUFFER_SIZE TEST_VALUE_RANGE
#endif /* TEST_VALUE_RANGE == 0 */
-
-/*
- * Arrays of this size are used for measuring uncached memory accesses by
- * picking a random location within the buffer. Make this smaller if there are
- * memory allocation errors.
- */
-#define LARGE_BUFFER_SIZE (100 * 1024 * 1024)
-
-/* How many times to run timing loop for performance tests */
-#define TEST_ITERATIONS 1000000
-#define TEST_BATCH_SIZE 100
-
/* Data is aligned on this many bytes (power of 2) */
#define ALIGNMENT_UNIT 32
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 893d5d0..f439834 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -71,8 +71,6 @@
* put them back in the pool.
*/
-#define N 65536
-#define TIME_S 5
#define MEMPOOL_ELT_SIZE 2048
#define MAX_KEEP 16
#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 4f8dc8f..9095e59 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -100,8 +100,6 @@
#define RING_SIZE 4096
#define MAX_BULK 32
-#define N 65536
-#define TIME_S 5
static rte_atomic32_t synchro;
--
2.7.0
* [dpdk-dev] [PATCH v2 4/4] app/test: move cycles autotest to first group
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
` (2 preceding siblings ...)
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 3/4] app/test: remove unused constants Thomas Monjalon
@ 2016-05-11 14:26 ` Thomas Monjalon
2016-05-24 15:20 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
4 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-11 14:26 UTC (permalink / raw)
To: dev
The cycles test was wrongly placed in the mempool perf test group.
It is moved to parallel test group 1, together with the timer test.
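For reference, a sketch of the overall shape of one parallel test group entry
in autotest_data.py after the move; the field values are illustrative and the
parser functions below are placeholders standing in for the real ones in
autotest_test_funcs.py.

# Placeholder parsers so this snippet is self-contained; the real functions
# live in autotest_test_funcs.py.
def default_autotest(child, test_name):
    return 0, "Success"

def timer_autotest(child, test_name):
    return 0, "Success"

group_1 = {
    "Prefix": "group_1",      # file prefix used for this group's EAL instance
    "Memory": "32",           # MB given to the group (example value)
    "Tests": [
        {
            "Name": "Cycles autotest",
            "Command": "cycles_autotest",
            "Func": default_autotest,   # generic "Test OK"/"Test Failed" parser
            "Report": None,
        },
        {
            "Name": "Timer autotest",
            "Command": "timer_autotest",
            "Func": timer_autotest,     # dedicated parser with reduced timeouts
            "Report": None,
        },
    ],
}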
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
app/test/autotest_data.py | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 25cca4e..78d2edd 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -58,6 +58,12 @@ parallel_test_group_list = [
"Tests" :
[
{
+ "Name" : "Cycles autotest",
+ "Command" : "cycles_autotest",
+ "Func" : default_autotest,
+ "Report" : None,
+ },
+ {
"Name" : "Timer autotest",
"Command" : "timer_autotest",
"Func" : timer_autotest,
@@ -377,12 +383,6 @@ non_parallel_test_group_list = [
"Tests" :
[
{
- "Name" : "Cycles autotest",
- "Command" : "cycles_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
"Name" : "Mempool performance autotest",
"Command" : "mempool_perf_autotest",
"Func" : default_autotest,
--
2.7.0
* Re: [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 0/4] reduce autotests constraints Thomas Monjalon
` (3 preceding siblings ...)
2016-05-11 14:26 ` [dpdk-dev] [PATCH v2 4/4] app/test: move cycles autotest to first group Thomas Monjalon
@ 2016-05-24 15:20 ` Thomas Monjalon
4 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-05-24 15:20 UTC (permalink / raw)
To: dev
2016-05-11 16:26, Thomas Monjalon:
> In order to make autotests easy to run often,
> the time and memory constraints are reduced.
Applied
There is some remaining work:
> Red autotest: Success [01m 36s] [02m 13s]
> The RED autotest needs some work.