* [dpdk-dev] [PATCH v1 1/4] app: make python apps pep8 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
@ 2016-12-08 15:51 ` John McNamara
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 2/4] app: make python apps python2/3 compliant John McNamara
From: John McNamara @ 2016-12-08 15:51 UTC
To: dev; +Cc: mkletzan, John McNamara
Make all DPDK Python applications compliant with the PEP8 standard
to allow for consistency checking of patches and to allow further
refactoring.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 81 +-
app/cmdline_test/cmdline_test_data.py | 401 +++++-----
app/test/autotest.py | 40 +-
app/test/autotest_data.py | 829 +++++++++++----------
app/test/autotest_runner.py | 739 +++++++++---------
app/test/autotest_test_funcs.py | 479 ++++++------
doc/guides/conf.py | 9 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 55 +-
tools/dpdk-devbind.py | 23 +-
tools/dpdk-pmdinfo.py | 61 +-
12 files changed, 1375 insertions(+), 1366 deletions(-)
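
For reference, the consistency checking mentioned in the commit message
can be done with the pep8 tool (since renamed pycodestyle). A minimal
sketch, assuming the pep8 package is installed and the script is run
from the DPDK tree root:

    # Sketch only: count PEP8 violations in two of the touched scripts.
    # Assumes the 'pep8' package (later renamed 'pycodestyle') is installed.
    import pep8

    style = pep8.StyleGuide()
    report = style.check_files(['app/cmdline_test/cmdline_test.py',
                                'app/test/autotest.py'])
    print('PEP8 violations found: %d' % report.total_errors)
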
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 8efc5ea..4729987 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -33,16 +33,21 @@
# Script that runs cmdline_test app and feeds keystrokes into it.
-import sys, pexpect, string, os, cmdline_test_data
+import cmdline_test_data
+import os
+import pexpect
+import sys
+
#
# function to run test
#
-def runTest(child,test):
- child.send(test["Sequence"])
- if test["Result"] == None:
- return 0
- child.expect(test["Result"],1)
+def runTest(child, test):
+ child.send(test["Sequence"])
+ if test["Result"] is None:
+ return 0
+ child.expect(test["Result"], 1)
+
#
# history test is a special case
@@ -57,57 +62,57 @@ def runTest(child,test):
# This is a self-contained test, it needs only a pexpect child
#
def runHistoryTest(child):
- # find out history size
- child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
- child.expect("History buffer size: \\d+", timeout=1)
- history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
- i = 0
+ # find out history size
+ child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
+ child.expect("History buffer size: \\d+", timeout=1)
+ history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
+ i = 0
- # fill the history with numbers
- while i < history_size / 10:
- # add 1 to prevent from parsing as octals
- child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
- # the app will simply print out the number
- child.expect(str(i + 100000000), timeout=1)
- i += 1
- # scroll back history
- child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
- child.expect("100000000", timeout=1)
+ # fill the history with numbers
+ while i < history_size / 10:
+ # add 1 to prevent from parsing as octals
+ child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
+ # the app will simply print out the number
+ child.expect(str(i + 100000000), timeout=1)
+ i += 1
+ # scroll back history
+ child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
+ child.expect("100000000", timeout=1)
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
child = pexpect.spawn(test_app_path)
print "Running command-line tests..."
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
- try:
- runTest(child,test)
- print "PASS"
- except:
- print "FAIL"
- print child
- sys.exit(1)
+ print (test["Name"] + ":").ljust(30),
+ try:
+ runTest(child, test)
+ print "PASS"
+ except:
+ print "FAIL"
+ print child
+ sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
print ("History fill test:").ljust(30),
try:
- runHistoryTest(child)
- print "PASS"
+ runHistoryTest(child)
+ print "PASS"
except:
- print "FAIL"
- print child
- sys.exit(1)
+ print "FAIL"
+ print child
+ sys.exit(1)
child.close()
sys.exit(0)
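
One substantive PEP8 fix above is the change from "== None" to "is None"
(rule E711). Identity comparison is preferred because "==" can be
overridden by a custom __eq__ method. A small illustration, not part of
the patch:

    class Weird(object):
        def __eq__(self, other):
            return True  # illustrative only: claims equality with anything

    w = Weird()
    print(w == None)   # True -- misleading, the overridden __eq__ runs
    print(w is None)   # False -- identity check cannot be overridden
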
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index b1945a5..3ce6cbc 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -33,8 +33,6 @@
# collection of static data
-import sys
-
# keycode constants
CTRL_A = chr(1)
CTRL_B = chr(2)
@@ -95,217 +93,220 @@
# and expected output (if any).
tests = [
-# test basic commands
- {"Name" : "command test 1",
- "Sequence" : "ambiguous first" + ENTER,
- "Result" : CMD1},
- {"Name" : "command test 2",
- "Sequence" : "ambiguous second" + ENTER,
- "Result" : CMD2},
- {"Name" : "command test 3",
- "Sequence" : "ambiguous ambiguous" + ENTER,
- "Result" : AMBIG},
- {"Name" : "command test 4",
- "Sequence" : "ambiguous ambiguous2" + ENTER,
- "Result" : AMBIG},
+ # test basic commands
+ {"Name": "command test 1",
+ "Sequence": "ambiguous first" + ENTER,
+ "Result": CMD1},
+ {"Name": "command test 2",
+ "Sequence": "ambiguous second" + ENTER,
+ "Result": CMD2},
+ {"Name": "command test 3",
+ "Sequence": "ambiguous ambiguous" + ENTER,
+ "Result": AMBIG},
+ {"Name": "command test 4",
+ "Sequence": "ambiguous ambiguous2" + ENTER,
+ "Result": AMBIG},
- {"Name" : "invalid command test 1",
- "Sequence" : "ambiguous invalid" + ENTER,
- "Result" : BAD_ARG},
-# test invalid commands
- {"Name" : "invalid command test 2",
- "Sequence" : "invalid" + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "invalid command test 3",
- "Sequence" : "ambiguousinvalid" + ENTER2,
- "Result" : NOT_FOUND},
+ {"Name": "invalid command test 1",
+ "Sequence": "ambiguous invalid" + ENTER,
+ "Result": BAD_ARG},
+ # test invalid commands
+ {"Name": "invalid command test 2",
+ "Sequence": "invalid" + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "invalid command test 3",
+ "Sequence": "ambiguousinvalid" + ENTER2,
+ "Result": NOT_FOUND},
-# test arrows and deletes
- {"Name" : "arrows & delete test 1",
- "Sequence" : "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
- "Result" : SINGLE},
- {"Name" : "arrows & delete test 2",
- "Sequence" : "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
- "Result" : SINGLE},
+ # test arrows and deletes
+ {"Name": "arrows & delete test 1",
+ "Sequence": "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
+ "Result": SINGLE},
+ {"Name": "arrows & delete test 2",
+ "Sequence": "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
+ "Result": SINGLE},
-# test backspace
- {"Name" : "backspace test",
- "Sequence" : "singlebad" + BKSPACE*3 + ENTER,
- "Result" : SINGLE},
+ # test backspace
+ {"Name": "backspace test",
+ "Sequence": "singlebad" + BKSPACE*3 + ENTER,
+ "Result": SINGLE},
-# test goto left and goto right
- {"Name" : "goto left test",
- "Sequence" : "biguous first" + CTRL_A + "am" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right test",
- "Sequence" : "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
- "Result" : CMD1},
+ # test goto left and goto right
+ {"Name": "goto left test",
+ "Sequence": "biguous first" + CTRL_A + "am" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right test",
+ "Sequence": "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
+ "Result": CMD1},
-# test goto words
- {"Name" : "goto left word test",
- "Sequence" : "ambiguous st" + ALT_B + "fir" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right word test",
- "Sequence" : "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
- "Result" : CMD1},
+ # test goto words
+ {"Name": "goto left word test",
+ "Sequence": "ambiguous st" + ALT_B + "fir" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right word test",
+ "Sequence": "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
+ "Result": CMD1},
-# test removing words
- {"Name" : "remove left word 1",
- "Sequence" : "single invalid" + CTRL_W + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove left word 2",
- "Sequence" : "single invalid" + ALT_BKSPACE + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove right word",
- "Sequence" : "single invalid" + ALT_B + ALT_D + ENTER,
- "Result" : SINGLE},
+ # test removing words
+ {"Name": "remove left word 1",
+ "Sequence": "single invalid" + CTRL_W + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove left word 2",
+ "Sequence": "single invalid" + ALT_BKSPACE + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove right word",
+ "Sequence": "single invalid" + ALT_B + ALT_D + ENTER,
+ "Result": SINGLE},
-# test kill buffer (copy and paste)
- {"Name" : "killbuffer test 1",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A + CTRL_Y + ENTER,
- "Result" : CMD1},
- {"Name" : "killbuffer test 2",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
- "Result" : NOT_FOUND},
+ # test kill buffer (copy and paste)
+ {"Name": "killbuffer test 1",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A +
+ CTRL_Y + ENTER,
+ "Result": CMD1},
+ {"Name": "killbuffer test 2",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
+ "Result": NOT_FOUND},
-# test newline
- {"Name" : "newline test",
- "Sequence" : "invalid" + CTRL_C + "single" + ENTER,
- "Result" : SINGLE},
+ # test newline
+ {"Name": "newline test",
+ "Sequence": "invalid" + CTRL_C + "single" + ENTER,
+ "Result": SINGLE},
-# test redisplay (nothing should really happen)
- {"Name" : "redisplay test",
- "Sequence" : "single" + CTRL_L + ENTER,
- "Result" : SINGLE},
+ # test redisplay (nothing should really happen)
+ {"Name": "redisplay test",
+ "Sequence": "single" + CTRL_L + ENTER,
+ "Result": SINGLE},
-# test autocomplete
- {"Name" : "autocomplete test 1",
- "Sequence" : "si" + TAB + ENTER,
- "Result" : SINGLE},
- {"Name" : "autocomplete test 2",
- "Sequence" : "si" + TAB + "_" + TAB + ENTER,
- "Result" : SINGLE_LONG},
- {"Name" : "autocomplete test 3",
- "Sequence" : "in" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 4",
- "Sequence" : "am" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 5",
- "Sequence" : "am" + TAB + "fir" + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 6",
- "Sequence" : "am" + TAB + "fir" + TAB + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 7",
- "Sequence" : "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 8",
- "Sequence" : "am" + TAB + " am" + TAB + " " + ENTER,
- "Result" : AMBIG},
- {"Name" : "autocomplete test 9",
- "Sequence" : "am" + TAB + "inv" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 10",
- "Sequence" : "au" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 11",
- "Sequence" : "au" + TAB + "1" + ENTER,
- "Result" : AUTO1},
- {"Name" : "autocomplete test 12",
- "Sequence" : "au" + TAB + "2" + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 13",
- "Sequence" : "au" + TAB + "2" + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 14",
- "Sequence" : "au" + TAB + "2 " + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 15",
- "Sequence" : "24" + TAB + ENTER,
- "Result" : "24"},
+ # test autocomplete
+ {"Name": "autocomplete test 1",
+ "Sequence": "si" + TAB + ENTER,
+ "Result": SINGLE},
+ {"Name": "autocomplete test 2",
+ "Sequence": "si" + TAB + "_" + TAB + ENTER,
+ "Result": SINGLE_LONG},
+ {"Name": "autocomplete test 3",
+ "Sequence": "in" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 4",
+ "Sequence": "am" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 5",
+ "Sequence": "am" + TAB + "fir" + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 6",
+ "Sequence": "am" + TAB + "fir" + TAB + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 7",
+ "Sequence": "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 8",
+ "Sequence": "am" + TAB + " am" + TAB + " " + ENTER,
+ "Result": AMBIG},
+ {"Name": "autocomplete test 9",
+ "Sequence": "am" + TAB + "inv" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 10",
+ "Sequence": "au" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 11",
+ "Sequence": "au" + TAB + "1" + ENTER,
+ "Result": AUTO1},
+ {"Name": "autocomplete test 12",
+ "Sequence": "au" + TAB + "2" + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 13",
+ "Sequence": "au" + TAB + "2" + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 14",
+ "Sequence": "au" + TAB + "2 " + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 15",
+ "Sequence": "24" + TAB + ENTER,
+ "Result": "24"},
-# test history
- {"Name" : "history test 1",
- "Sequence" : "invalid" + ENTER + "single" + ENTER + "invalid" + ENTER + UP + CTRL_P + ENTER,
- "Result" : SINGLE},
- {"Name" : "history test 2",
- "Sequence" : "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" + ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
- "Result" : SINGLE},
+ # test history
+ {"Name": "history test 1",
+ "Sequence": "invalid" + ENTER + "single" + ENTER + "invalid" +
+ ENTER + UP + CTRL_P + ENTER,
+ "Result": SINGLE},
+ {"Name": "history test 2",
+ "Sequence": "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" +
+ ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
+ "Result": SINGLE},
-#
-# tests that improve coverage
-#
+ #
+ # tests that improve coverage
+ #
-# empty space tests
- {"Name" : "empty space test 1",
- "Sequence" : RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 2",
- "Sequence" : BKSPACE + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 3",
- "Sequence" : CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 4",
- "Sequence" : ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 5",
- "Sequence" : " " + CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 6",
- "Sequence" : " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 7",
- "Sequence" : " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 8",
- "Sequence" : " space" + CTRL_W*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 9",
- "Sequence" : " space" + ALT_BKSPACE*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 10",
- "Sequence" : " space " + CTRL_A + ALT_D*3 + ENTER,
- "Result" : PROMPT},
+ # empty space tests
+ {"Name": "empty space test 1",
+ "Sequence": RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 2",
+ "Sequence": BKSPACE + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 3",
+ "Sequence": CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 4",
+ "Sequence": ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 5",
+ "Sequence": " " + CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 6",
+ "Sequence": " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 7",
+ "Sequence": " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 8",
+ "Sequence": " space" + CTRL_W*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 9",
+ "Sequence": " space" + ALT_BKSPACE*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 10",
+ "Sequence": " space " + CTRL_A + ALT_D*3 + ENTER,
+ "Result": PROMPT},
-# non-printable char tests
- {"Name" : "non-printable test 1",
- "Sequence" : chr(27) + chr(47) + ENTER,
- "Result" : PROMPT},
- {"Name" : "non-printable test 2",
- "Sequence" : chr(27) + chr(128) + ENTER*7,
- "Result" : PROMPT},
- {"Name" : "non-printable test 3",
- "Sequence" : chr(27) + chr(91) + chr(127) + ENTER*6,
- "Result" : PROMPT},
+ # non-printable char tests
+ {"Name": "non-printable test 1",
+ "Sequence": chr(27) + chr(47) + ENTER,
+ "Result": PROMPT},
+ {"Name": "non-printable test 2",
+ "Sequence": chr(27) + chr(128) + ENTER*7,
+ "Result": PROMPT},
+ {"Name": "non-printable test 3",
+ "Sequence": chr(27) + chr(91) + chr(127) + ENTER*6,
+ "Result": PROMPT},
-# miscellaneous tests
- {"Name" : "misc test 1",
- "Sequence" : ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 2",
- "Sequence" : "single #comment" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 3",
- "Sequence" : "#empty line" + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 4",
- "Sequence" : " single " + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 5",
- "Sequence" : "single#" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 6",
- "Sequence" : 'a' * 257 + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "misc test 7",
- "Sequence" : "clear_history" + UP*5 + DOWN*5 + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 8",
- "Sequence" : "a" + HELP + CTRL_C,
- "Result" : PROMPT},
- {"Name" : "misc test 9",
- "Sequence" : CTRL_D*3,
- "Result" : None},
+ # miscellaneous tests
+ {"Name": "misc test 1",
+ "Sequence": ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 2",
+ "Sequence": "single #comment" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 3",
+ "Sequence": "#empty line" + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 4",
+ "Sequence": " single " + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 5",
+ "Sequence": "single#" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 6",
+ "Sequence": 'a' * 257 + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "misc test 7",
+ "Sequence": "clear_history" + UP*5 + DOWN*5 + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 8",
+ "Sequence": "a" + HELP + CTRL_C,
+ "Result": PROMPT},
+ {"Name": "misc test 9",
+ "Sequence": CTRL_D*3,
+ "Result": None},
]
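
Each entry above pairs a keystroke "Sequence" with an expected "Result"
pattern; runTest() in cmdline_test.py sends the former and expects the
latter. A minimal, self-contained illustration of the same send/expect
pattern, driving a plain shell instead of the cmdline_test binary
(hypothetical, not part of the patch):

    import pexpect

    # Hypothetical test entry in the same shape as the table above.
    test = {"Name": "echo test",
            "Sequence": "echo hello\n",
            "Result": "hello"}

    child = pexpect.spawn("/bin/sh")
    child.send(test["Sequence"])
    if test["Result"] is not None:
        child.expect(test["Result"], timeout=1)
    print("PASS: " + test["Name"])
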
diff --git a/app/test/autotest.py b/app/test/autotest.py
index b9fd6b6..3a00538 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -33,44 +33,46 @@
# Script that uses either test app or qemu controlled by python-pexpect
-import sys, autotest_data, autotest_runner
-
+import autotest_data
+import autotest_runner
+import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print"Usage: autotest.py [test app|test iso image]",
+ print "[target] [whitelist|-blacklist]"
if len(sys.argv) < 3:
- usage()
- sys.exit(1)
+ usage()
+ sys.exit(1)
target = sys.argv[2]
-test_whitelist=None
-test_blacklist=None
+test_whitelist = None
+test_blacklist = None
# get blacklist/whitelist
if len(sys.argv) > 3:
- testlist = sys.argv[3].split(',')
- testlist = [test.lower() for test in testlist]
- if testlist[0].startswith('-'):
- testlist[0] = testlist[0].lstrip('-')
- test_blacklist = testlist
- else:
- test_whitelist = testlist
+ testlist = sys.argv[3].split(',')
+ testlist = [test.lower() for test in testlist]
+ if testlist[0].startswith('-'):
+ testlist[0] = testlist[0].lstrip('-')
+ test_blacklist = testlist
+ else:
+ test_whitelist = testlist
-cmdline = "%s -c f -n 4"%(sys.argv[1])
+cmdline = "%s -c f -n 4" % (sys.argv[1])
print cmdline
-runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist, test_whitelist)
+runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
+ test_whitelist)
for test_group in autotest_data.parallel_test_group_list:
- runner.add_parallel_test_group(test_group)
+ runner.add_parallel_test_group(test_group)
for test_group in autotest_data.non_parallel_test_group_list:
- runner.add_non_parallel_test_group(test_group)
+ runner.add_non_parallel_test_group(test_group)
num_fails = runner.run_all_tests()
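
For reference, the optional third argument parsed above is a
comma-separated test list, and a leading '-' switches it from whitelist
to blacklist mode. A standalone sketch of the same parsing, with a
hypothetical argument value:

    arg = "-timer,debug"  # hypothetical value of sys.argv[3]
    test_whitelist = None
    test_blacklist = None
    testlist = [test.lower() for test in arg.split(',')]
    if testlist[0].startswith('-'):
        testlist[0] = testlist[0].lstrip('-')
        test_blacklist = testlist   # here: ['timer', 'debug']
    else:
        test_whitelist = testlist
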
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 9e8fd94..5176064 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -36,12 +36,14 @@
from glob import glob
from autotest_test_funcs import *
+
# quick and dirty function to find out number of sockets
def num_sockets():
- result = len(glob("/sys/devices/system/node/node*"))
- if result == 0:
- return 1
- return result
+ result = len(glob("/sys/devices/system/node/node*"))
+ if result == 0:
+ return 1
+ return result
+
# Assign given number to each socket
# e.g. 32 becomes 32,32 or 32,32,32,32
@@ -51,420 +53,419 @@ def per_sockets(num):
# groups of tests that can be run in parallel
# the grouping has been found largely empirically
parallel_test_group_list = [
-
-{
- "Prefix": "group_1",
- "Memory" : per_sockets(8),
- "Tests" :
- [
- {
- "Name" : "Cycles autotest",
- "Command" : "cycles_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Timer autotest",
- "Command" : "timer_autotest",
- "Func" : timer_autotest,
- "Report" : None,
- },
- {
- "Name" : "Debug autotest",
- "Command" : "debug_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Errno autotest",
- "Command" : "errno_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Meter autotest",
- "Command" : "meter_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Common autotest",
- "Command" : "common_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Resource autotest",
- "Command" : "resource_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_2",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Memory autotest",
- "Command" : "memory_autotest",
- "Func" : memory_autotest,
- "Report" : None,
- },
- {
- "Name" : "Read/write lock autotest",
- "Command" : "rwlock_autotest",
- "Func" : rwlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Logs autotest",
- "Command" : "logs_autotest",
- "Func" : logs_autotest,
- "Report" : None,
- },
- {
- "Name" : "CPU flags autotest",
- "Command" : "cpuflags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Version autotest",
- "Command" : "version_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL filesystem autotest",
- "Command" : "eal_fs_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL flags autotest",
- "Command" : "eal_flags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Hash autotest",
- "Command" : "hash_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ],
-},
-{
- "Prefix": "group_3",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "LPM autotest",
- "Command" : "lpm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "LPM6 autotest",
- "Command" : "lpm6_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memcpy autotest",
- "Command" : "memcpy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memzone autotest",
- "Command" : "memzone_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "String autotest",
- "Command" : "string_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Alarm autotest",
- "Command" : "alarm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_4",
- "Memory" : per_sockets(128),
- "Tests" :
- [
- {
- "Name" : "PCI autotest",
- "Command" : "pci_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Malloc autotest",
- "Command" : "malloc_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Multi-process autotest",
- "Command" : "multiprocess_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mbuf autotest",
- "Command" : "mbuf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Per-lcore autotest",
- "Command" : "per_lcore_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Ring autotest",
- "Command" : "ring_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_5",
- "Memory" : "32",
- "Tests" :
- [
- {
- "Name" : "Spinlock autotest",
- "Command" : "spinlock_autotest",
- "Func" : spinlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Byte order autotest",
- "Command" : "byteorder_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "TAILQ autotest",
- "Command" : "tailq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Command-line autotest",
- "Command" : "cmdline_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Interrupts autotest",
- "Command" : "interrupt_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_6",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Function reentrancy autotest",
- "Command" : "func_reentrancy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mempool autotest",
- "Command" : "mempool_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Atomics autotest",
- "Command" : "atomic_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Prefetch autotest",
- "Command" : "prefetch_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Red autotest",
- "Command" : "red_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
-{
- "Prefix" : "group_7",
- "Memory" : "64",
- "Tests" :
- [
- {
- "Name" : "PMD ring autotest",
- "Command" : "ring_pmd_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Access list control autotest",
- "Command" : "acl_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Sched autotest",
- "Command" : "sched_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
+ {
+ "Prefix": "group_1",
+ "Memory": per_sockets(8),
+ "Tests":
+ [
+ {
+ "Name": "Cycles autotest",
+ "Command": "cycles_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Timer autotest",
+ "Command": "timer_autotest",
+ "Func": timer_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Debug autotest",
+ "Command": "debug_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Errno autotest",
+ "Command": "errno_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Meter autotest",
+ "Command": "meter_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Common autotest",
+ "Command": "common_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Resource autotest",
+ "Command": "resource_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_2",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Memory autotest",
+ "Command": "memory_autotest",
+ "Func": memory_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Read/write lock autotest",
+ "Command": "rwlock_autotest",
+ "Func": rwlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Logs autotest",
+ "Command": "logs_autotest",
+ "Func": logs_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "CPU flags autotest",
+ "Command": "cpuflags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Version autotest",
+ "Command": "version_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL filesystem autotest",
+ "Command": "eal_fs_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL flags autotest",
+ "Command": "eal_flags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Hash autotest",
+ "Command": "hash_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ],
+ },
+ {
+ "Prefix": "group_3",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "LPM autotest",
+ "Command": "lpm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "LPM6 autotest",
+ "Command": "lpm6_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memcpy autotest",
+ "Command": "memcpy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memzone autotest",
+ "Command": "memzone_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "String autotest",
+ "Command": "string_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Alarm autotest",
+ "Command": "alarm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_4",
+ "Memory": per_sockets(128),
+ "Tests":
+ [
+ {
+ "Name": "PCI autotest",
+ "Command": "pci_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Malloc autotest",
+ "Command": "malloc_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Multi-process autotest",
+ "Command": "multiprocess_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mbuf autotest",
+ "Command": "mbuf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Per-lcore autotest",
+ "Command": "per_lcore_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Ring autotest",
+ "Command": "ring_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_5",
+ "Memory": "32",
+ "Tests":
+ [
+ {
+ "Name": "Spinlock autotest",
+ "Command": "spinlock_autotest",
+ "Func": spinlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Byte order autotest",
+ "Command": "byteorder_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "TAILQ autotest",
+ "Command": "tailq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Command-line autotest",
+ "Command": "cmdline_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Interrupts autotest",
+ "Command": "interrupt_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_6",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Function reentrancy autotest",
+ "Command": "func_reentrancy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mempool autotest",
+ "Command": "mempool_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Atomics autotest",
+ "Command": "atomic_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Prefetch autotest",
+ "Command": "prefetch_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Red autotest",
+ "Command": "red_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_7",
+ "Memory": "64",
+ "Tests":
+ [
+ {
+ "Name": "PMD ring autotest",
+ "Command": "ring_pmd_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Access list control autotest",
+ "Command": "acl_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Sched autotest",
+ "Command": "sched_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
# tests that should not be run when any other tests are running
non_parallel_test_group_list = [
-{
- "Prefix" : "kni",
- "Memory" : "512",
- "Tests" :
- [
- {
- "Name" : "KNI autotest",
- "Command" : "kni_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "mempool_perf",
- "Memory" : per_sockets(256),
- "Tests" :
- [
- {
- "Name" : "Mempool performance autotest",
- "Command" : "mempool_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "memcpy_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Memcpy performance autotest",
- "Command" : "memcpy_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "hash_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Hash performance autotest",
- "Command" : "hash_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power autotest",
- "Command" : "power_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_acpi_cpufreq",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power ACPI cpufreq autotest",
- "Command" : "power_acpi_cpufreq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_kvm_vm",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power KVM VM autotest",
- "Command" : "power_kvm_vm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "timer_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Timer performance autotest",
- "Command" : "timer_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ {
+ "Prefix": "kni",
+ "Memory": "512",
+ "Tests":
+ [
+ {
+ "Name": "KNI autotest",
+ "Command": "kni_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "mempool_perf",
+ "Memory": per_sockets(256),
+ "Tests":
+ [
+ {
+ "Name": "Mempool performance autotest",
+ "Command": "mempool_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "memcpy_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Memcpy performance autotest",
+ "Command": "memcpy_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "hash_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Hash performance autotest",
+ "Command": "hash_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power autotest",
+ "Command": "power_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_acpi_cpufreq",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power ACPI cpufreq autotest",
+ "Command": "power_acpi_cpufreq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_kvm_vm",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power KVM VM autotest",
+ "Command": "power_kvm_vm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "timer_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Timer performance autotest",
+ "Command": "timer_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
-#
-# Please always make sure that ring_perf is the last test!
-#
-{
- "Prefix": "ring_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Ring performance autotest",
- "Command" : "ring_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ #
+ # Please always make sure that ring_perf is the last test!
+ #
+ {
+ "Prefix": "ring_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Ring performance autotest",
+ "Command": "ring_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
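
The hunks above elide the body of per_sockets(); from its comment
("e.g. 32 becomes 32,32 or 32,32,32,32") it repeats the given amount
once per detected socket, producing the value later passed to
--socket-mem. A plausible sketch based on that comment, assuming the
num_sockets() helper shown above:

    def per_sockets(num):
        # sketch: the patch elides the real body; with 2 sockets,
        # per_sockets(32) returns "32,32"
        return ",".join([str(num)] * num_sockets())
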
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 21d3be2..55b63a8 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -33,20 +33,29 @@
# The main logic behind running autotests in parallel
-import multiprocessing, subprocess, sys, pexpect, re, time, os, StringIO, csv
+import StringIO
+import csv
+import multiprocessing
+import pexpect
+import re
+import subprocess
+import sys
+import time
# wait for prompt
+
+
def wait_prompt(child):
- try:
- child.sendline()
- result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
- timeout = 120)
- except:
- return False
- if result == 0:
- return True
- else:
- return False
+ try:
+ child.sendline()
+ result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
+ timeout=120)
+ except:
+ return False
+ if result == 0:
+ return True
+ else:
+ return False
# run a test group
# each result tuple in results list consists of:
@@ -60,363 +69,363 @@ def wait_prompt(child):
# this function needs to be outside AutotestRunner class
# because otherwise Pool won't work (or rather it will require
# quite a bit of effort to make it work).
-def run_test_group(cmdline, test_group):
- results = []
- child = None
- start_time = time.time()
- startuplog = None
-
- # run test app
- try:
- # prepare logging of init
- startuplog = StringIO.StringIO()
-
- print >>startuplog, "\n%s %s\n" % ("="*20, test_group["Prefix"])
- print >>startuplog, "\ncmdline=%s" % cmdline
-
- child = pexpect.spawn(cmdline, logfile=startuplog)
-
- # wait for target to boot
- if not wait_prompt(child):
- child.close()
-
- results.append((-1, "Fail [No prompt]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for test in test_group["Tests"]:
- results.append((-1, "Fail [No prompt]", test["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- except:
- results.append((-1, "Fail [Can't run]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for t in test_group["Tests"]:
- results.append((-1, "Fail [Can't run]", t["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- # startup was successful
- results.append((0, "Success", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # parse the binary for available test commands
- binary = cmdline.split()[0]
- stripped = 'not stripped' not in subprocess.check_output(['file', binary])
- if not stripped:
- symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
- avail_cmds = re.findall('test_register_(\w+)', symbols)
-
- # run all tests in test group
- for test in test_group["Tests"]:
-
- # create log buffer for each test
- # in multiprocessing environment, the logging would be
- # interleaved and will create a mess, hence the buffering
- logfile = StringIO.StringIO()
- child.logfile = logfile
-
- result = ()
-
- # make a note when the test started
- start_time = time.time()
-
- try:
- # print test name to log buffer
- print >>logfile, "\n%s %s\n" % ("-"*20, test["Name"])
-
- # run test function associated with the test
- if stripped or test["Command"] in avail_cmds:
- result = test["Func"](child, test["Command"])
- else:
- result = (0, "Skipped [Not Available]")
-
- # make a note when the test was finished
- end_time = time.time()
-
- # append test data to the result tuple
- result += (test["Name"], end_time - start_time,
- logfile.getvalue())
-
- # call report function, if any defined, and supply it with
- # target and complete log for test run
- if test["Report"]:
- report = test["Report"](self.target, log)
-
- # append report to results tuple
- result += (report,)
- else:
- # report is None
- result += (None,)
- except:
- # make a note when the test crashed
- end_time = time.time()
-
- # mark test as failed
- result = (-1, "Fail [Crash]", test["Name"],
- end_time - start_time, logfile.getvalue(), None)
- finally:
- # append the results to the results list
- results.append(result)
-
- # regardless of whether test has crashed, try quitting it
- try:
- child.sendline("quit")
- child.close()
- # if the test crashed, just do nothing instead
- except:
- # nop
- pass
-
- # return test results
- return results
-
+def run_test_group(cmdline, test_group):
+ results = []
+ child = None
+ start_time = time.time()
+ startuplog = None
+
+ # run test app
+ try:
+ # prepare logging of init
+ startuplog = StringIO.StringIO()
+
+ print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
+ print >>startuplog, "\ncmdline=%s" % cmdline
+
+ child = pexpect.spawn(cmdline, logfile=startuplog)
+
+ # wait for target to boot
+ if not wait_prompt(child):
+ child.close()
+
+ results.append((-1,
+ "Fail [No prompt]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for test in test_group["Tests"]:
+ results.append((-1, "Fail [No prompt]", test["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ except:
+ results.append((-1,
+ "Fail [Can't run]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for t in test_group["Tests"]:
+ results.append((-1, "Fail [Can't run]", t["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ # startup was successful
+ results.append((0, "Success", "Start %s" % test_group["Prefix"],
+ time.time() - start_time, startuplog.getvalue(), None))
+
+ # parse the binary for available test commands
+ binary = cmdline.split()[0]
+ stripped = 'not stripped' not in subprocess.check_output(['file', binary])
+ if not stripped:
+ symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
+ avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+ # run all tests in test group
+ for test in test_group["Tests"]:
+
+ # create log buffer for each test
+ # in multiprocessing environment, the logging would be
+ # interleaved and will create a mess, hence the buffering
+ logfile = StringIO.StringIO()
+ child.logfile = logfile
+
+ result = ()
+
+ # make a note when the test started
+ start_time = time.time()
+
+ try:
+ # print test name to log buffer
+ print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+
+ # run test function associated with the test
+ if stripped or test["Command"] in avail_cmds:
+ result = test["Func"](child, test["Command"])
+ else:
+ result = (0, "Skipped [Not Available]")
+
+ # make a note when the test was finished
+ end_time = time.time()
+
+ # append test data to the result tuple
+ result += (test["Name"], end_time - start_time,
+ logfile.getvalue())
+
+ # call report function, if any defined, and supply it with
+ # target and complete log for test run
+ if test["Report"]:
+ report = test["Report"](self.target, log)
+
+ # append report to results tuple
+ result += (report,)
+ else:
+ # report is None
+ result += (None,)
+ except:
+ # make a note when the test crashed
+ end_time = time.time()
+
+ # mark test as failed
+ result = (-1, "Fail [Crash]", test["Name"],
+ end_time - start_time, logfile.getvalue(), None)
+ finally:
+ # append the results to the results list
+ results.append(result)
+
+ # regardless of whether test has crashed, try quitting it
+ try:
+ child.sendline("quit")
+ child.close()
+ # if the test crashed, just do nothing instead
+ except:
+ # nop
+ pass
+
+ # return test results
+ return results
# class representing an instance of autotests run
class AutotestRunner:
- cmdline = ""
- parallel_test_groups = []
- non_parallel_test_groups = []
- logfile = None
- csvwriter = None
- target = ""
- start = None
- n_tests = 0
- fails = 0
- log_buffers = []
- blacklist = []
- whitelist = []
-
-
- def __init__(self, cmdline, target, blacklist, whitelist):
- self.cmdline = cmdline
- self.target = target
- self.blacklist = blacklist
- self.whitelist = whitelist
-
- # log file filename
- logfile = "%s.log" % target
- csvfile = "%s.csv" % target
-
- self.logfile = open(logfile, "w")
- csvfile = open(csvfile, "w")
- self.csvwriter = csv.writer(csvfile)
-
- # prepare results table
- self.csvwriter.writerow(["test_name","test_result","result_str"])
-
-
-
- # set up cmdline string
- def __get_cmdline(self, test):
- cmdline = self.cmdline
-
- # append memory limitations for each test
- # otherwise tests won't run in parallel
- if not "i686" in self.target:
- cmdline += " --socket-mem=%s"% test["Memory"]
- else:
- # affinitize startup so that tests don't fail on i686
- cmdline = "taskset 1 " + cmdline
- cmdline += " -m " + str(sum(map(int,test["Memory"].split(","))))
-
- # set group prefix for autotest group
- # otherwise they won't run in parallel
- cmdline += " --file-prefix=%s"% test["Prefix"]
-
- return cmdline
-
-
-
- def add_parallel_test_group(self,test_group):
- self.parallel_test_groups.append(test_group)
-
- def add_non_parallel_test_group(self,test_group):
- self.non_parallel_test_groups.append(test_group)
-
-
- def __process_results(self, results):
- # this iterates over individual test results
- for i, result in enumerate(results):
-
- # increase total number of tests that were run
- # do not include "start" test
- if i > 0:
- self.n_tests += 1
-
- # unpack result tuple
- test_result, result_str, test_name, \
- test_time, log, report = result
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
-
- # don't print out total time every line, it's the same anyway
- if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
- else:
- print ""
-
- # if test failed and it wasn't a "start" test
- if test_result < 0 and not i == 0:
- self.fails += 1
-
- # collect logs
- self.log_buffers.append(log)
-
- # create report if it exists
- if report:
- try:
- f = open("%s_%s_report.rst" % (self.target,test_name), "w")
- except IOError:
- print "Report for %s could not be created!" % test_name
- else:
- with f:
- f.write(report)
-
- # write test result to CSV file
- if i != 0:
- self.csvwriter.writerow([test_name, test_result, result_str])
-
-
-
-
- # this function iterates over test groups and removes each
- # test that is not in whitelist/blacklist
- def __filter_groups(self, test_groups):
- groups_to_remove = []
-
- # filter out tests from parallel test groups
- for i, test_group in enumerate(test_groups):
-
- # iterate over a copy so that we could safely delete individual tests
- for test in test_group["Tests"][:]:
- test_id = test["Command"]
-
- # dump tests are specified in full e.g. "Dump_mempool"
- if "_autotest" in test_id:
- test_id = test_id[:-len("_autotest")]
-
- # filter out blacklisted/whitelisted tests
- if self.blacklist and test_id in self.blacklist:
- test_group["Tests"].remove(test)
- continue
- if self.whitelist and test_id not in self.whitelist:
- test_group["Tests"].remove(test)
- continue
-
- # modify or remove original group
- if len(test_group["Tests"]) > 0:
- test_groups[i] = test_group
- else:
- # remember which groups should be deleted
- # put the numbers backwards so that we start
- # deleting from the end, not from the beginning
- groups_to_remove.insert(0, i)
-
- # remove test groups that need to be removed
- for i in groups_to_remove:
- del test_groups[i]
-
- return test_groups
-
-
-
- # iterate over test groups and run tests associated with them
- def run_all_tests(self):
- # filter groups
- self.parallel_test_groups = \
- self.__filter_groups(self.parallel_test_groups)
- self.non_parallel_test_groups = \
- self.__filter_groups(self.non_parallel_test_groups)
-
- # create a pool of worker threads
- pool = multiprocessing.Pool(processes=1)
-
- results = []
-
- # whatever happens, try to save as much logs as possible
- try:
-
- # create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
-
- # make a note of tests start time
- self.start = time.time()
-
- # assign worker threads to run test groups
- for test_group in self.parallel_test_groups:
- result = pool.apply_async(run_test_group,
- [self.__get_cmdline(test_group), test_group])
- results.append(result)
-
- # iterate while we have group execution results to get
- while len(results) > 0:
-
- # iterate over a copy to be able to safely delete results
- # this iterates over a list of group results
- for group_result in results[:]:
-
- # if the thread hasn't finished yet, continue
- if not group_result.ready():
- continue
-
- res = group_result.get()
-
- self.__process_results(res)
-
- # remove result from results list once we're done with it
- results.remove(group_result)
-
- # run non_parallel tests. they are run one by one, synchronously
- for test_group in self.non_parallel_test_groups:
- group_result = run_test_group(self.__get_cmdline(test_group), test_group)
-
- self.__process_results(group_result)
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60, total_time % 60)
- if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
-
- # write summary to logfile
- self.logfile.write("Summary\n")
- self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
- self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
- self.logfile.write("Failed tests: ".ljust(15) + "%i\n" % self.fails)
- except:
- print "Exception occured"
- print sys.exc_info()
- self.fails = 1
-
- # drop logs from all executions to a logfile
- for buf in self.log_buffers:
- self.logfile.write(buf.replace("\r",""))
-
- log_buffers = []
-
- return self.fails
+ cmdline = ""
+ parallel_test_groups = []
+ non_parallel_test_groups = []
+ logfile = None
+ csvwriter = None
+ target = ""
+ start = None
+ n_tests = 0
+ fails = 0
+ log_buffers = []
+ blacklist = []
+ whitelist = []
+
+ def __init__(self, cmdline, target, blacklist, whitelist):
+ self.cmdline = cmdline
+ self.target = target
+ self.blacklist = blacklist
+ self.whitelist = whitelist
+
+ # log file filename
+ logfile = "%s.log" % target
+ csvfile = "%s.csv" % target
+
+ self.logfile = open(logfile, "w")
+ csvfile = open(csvfile, "w")
+ self.csvwriter = csv.writer(csvfile)
+
+ # prepare results table
+ self.csvwriter.writerow(["test_name", "test_result", "result_str"])
+
+ # set up cmdline string
+ def __get_cmdline(self, test):
+ cmdline = self.cmdline
+
+ # append memory limitations for each test
+ # otherwise tests won't run in parallel
+ if "i686" not in self.target:
+ cmdline += " --socket-mem=%s" % test["Memory"]
+ else:
+ # affinitize startup so that tests don't fail on i686
+ cmdline = "taskset 1 " + cmdline
+ cmdline += " -m " + str(sum(map(int, test["Memory"].split(","))))
+
+ # set group prefix for autotest group
+ # otherwise they won't run in parallel
+ cmdline += " --file-prefix=%s" % test["Prefix"]
+
+ return cmdline
+
+ def add_parallel_test_group(self, test_group):
+ self.parallel_test_groups.append(test_group)
+
+ def add_non_parallel_test_group(self, test_group):
+ self.non_parallel_test_groups.append(test_group)
+
+ def __process_results(self, results):
+ # this iterates over individual test results
+ for i, result in enumerate(results):
+
+ # increase total number of tests that were run
+ # do not include "start" test
+ if i > 0:
+ self.n_tests += 1
+
+ # unpack result tuple
+ test_result, result_str, test_name, \
+ test_time, log, report = result
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print results, test run time and total time since start
+ print ("%s:" % test_name).ljust(30),
+ print result_str.ljust(29),
+ print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+
+ # don't print out total time every line, it's the same anyway
+ if i == len(results) - 1:
+ print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ else:
+ print ""
+
+ # if test failed and it wasn't a "start" test
+ if test_result < 0 and not i == 0:
+ self.fails += 1
+
+ # collect logs
+ self.log_buffers.append(log)
+
+ # create report if it exists
+ if report:
+ try:
+ f = open("%s_%s_report.rst" %
+ (self.target, test_name), "w")
+ except IOError:
+ print "Report for %s could not be created!" % test_name
+ else:
+ with f:
+ f.write(report)
+
+ # write test result to CSV file
+ if i != 0:
+ self.csvwriter.writerow([test_name, test_result, result_str])
+
+ # this function iterates over test groups and removes each
+ # test that is not in whitelist/blacklist
+ def __filter_groups(self, test_groups):
+ groups_to_remove = []
+
+ # filter out tests from parallel test groups
+ for i, test_group in enumerate(test_groups):
+
+ # iterate over a copy so that we could safely delete individual
+ # tests
+ for test in test_group["Tests"][:]:
+ test_id = test["Command"]
+
+ # dump tests are specified in full e.g. "Dump_mempool"
+ if "_autotest" in test_id:
+ test_id = test_id[:-len("_autotest")]
+
+ # filter out blacklisted/whitelisted tests
+ if self.blacklist and test_id in self.blacklist:
+ test_group["Tests"].remove(test)
+ continue
+ if self.whitelist and test_id not in self.whitelist:
+ test_group["Tests"].remove(test)
+ continue
+
+ # modify or remove original group
+ if len(test_group["Tests"]) > 0:
+ test_groups[i] = test_group
+ else:
+ # remember which groups should be deleted
+ # put the numbers backwards so that we start
+ # deleting from the end, not from the beginning
+ groups_to_remove.insert(0, i)
+
+ # remove test groups that need to be removed
+ for i in groups_to_remove:
+ del test_groups[i]
+
+ return test_groups
+
+ # iterate over test groups and run tests associated with them
+ def run_all_tests(self):
+ # filter groups
+ self.parallel_test_groups = \
+ self.__filter_groups(self.parallel_test_groups)
+ self.non_parallel_test_groups = \
+ self.__filter_groups(self.non_parallel_test_groups)
+
+ # create a pool of worker threads
+ pool = multiprocessing.Pool(processes=1)
+
+ results = []
+
+ # whatever happens, try to save as much logs as possible
+ try:
+
+ # create table header
+ print ""
+ print "Test name".ljust(30),
+ print "Test result".ljust(29),
+ print "Test".center(9),
+ print "Total".center(9)
+ print "=" * 80
+
+ # make a note of tests start time
+ self.start = time.time()
+
+ # assign worker threads to run test groups
+ for test_group in self.parallel_test_groups:
+ result = pool.apply_async(run_test_group,
+ [self.__get_cmdline(test_group),
+ test_group])
+ results.append(result)
+
+ # iterate while we have group execution results to get
+ while len(results) > 0:
+
+ # iterate over a copy to be able to safely delete results
+ # this iterates over a list of group results
+ for group_result in results[:]:
+
+ # if the thread hasn't finished yet, continue
+ if not group_result.ready():
+ continue
+
+ res = group_result.get()
+
+ self.__process_results(res)
+
+ # remove result from results list once we're done with it
+ results.remove(group_result)
+
+ # run non_parallel tests. they are run one by one, synchronously
+ for test_group in self.non_parallel_test_groups:
+ group_result = run_test_group(
+ self.__get_cmdline(test_group), test_group)
+
+ self.__process_results(group_result)
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print out summary
+ print "=" * 80
+ print "Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60)
+ if self.fails != 0:
+ print "Number of failed tests: %s" % str(self.fails)
+
+ # write summary to logfile
+ self.logfile.write("Summary\n")
+ self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
+ self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
+ self.logfile.write("Failed tests: ".ljust(
+ 15) + "%i\n" % self.fails)
+ except:
+ print "Exception occurred"
+ print sys.exc_info()
+ self.fails = 1
+
+ # drop logs from all executions to a logfile
+ for buf in self.log_buffers:
+ self.logfile.write(buf.replace("\r", ""))
+
+ return self.fails
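
To make __get_cmdline() concrete: on a non-i686 target, a group with
Memory "32" and Prefix "group_5" extends the base command line with
per-group memory and file-prefix options so that groups can run in
parallel. A sketch with illustrative values, mirroring the method above:

    cmdline = "./test -c f -n 4"                  # sample base command line
    test = {"Memory": "32", "Prefix": "group_5"}  # sample group settings
    cmdline += " --socket-mem=%s" % test["Memory"]
    cmdline += " --file-prefix=%s" % test["Prefix"]
    # result: ./test -c f -n 4 --socket-mem=32 --file-prefix=group_5
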
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index 14cffd0..c482ea8 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -33,257 +33,272 @@
# Test functions
-import sys, pexpect, time, os, re
+import pexpect
# default autotest, used to run most tests
# waits for "Test OK"
+
+
def default_autotest(child, test_name):
- child.sendline(test_name)
- result = child.expect(["Test OK", "Test Failed",
- "Command not found", pexpect.TIMEOUT], timeout = 900)
- if result == 1:
- return -1, "Fail"
- elif result == 2:
- return -1, "Fail [Not found]"
- elif result == 3:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ result = child.expect(["Test OK", "Test Failed",
+ "Command not found", pexpect.TIMEOUT], timeout=900)
+ if result == 1:
+ return -1, "Fail"
+ elif result == 2:
+ return -1, "Fail [Not found]"
+ elif result == 3:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
# autotest used to run dump commands
# just fires the command
+
+
def dump_autotest(child, test_name):
- child.sendline(test_name)
- return 0, "Success"
+ child.sendline(test_name)
+ return 0, "Success"
# memory autotest
# reads output and waits for Test OK
+
+
def memory_autotest(child, test_name):
- child.sendline(test_name)
- regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, socket_id:[0-9]*"
- index = child.expect([regexp, pexpect.TIMEOUT], timeout = 180)
- if index != 0:
- return -1, "Fail [Timeout]"
- size = int(child.match.groups()[0], 16)
- if size <= 0:
- return -1, "Fail [Bad size]"
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, " \
+ "socket_id:[0-9]*"
+ index = child.expect([regexp, pexpect.TIMEOUT], timeout=180)
+ if index != 0:
+ return -1, "Fail [Timeout]"
+ size = int(child.match.groups()[0], 16)
+ if size <= 0:
+ return -1, "Fail [Bad size]"
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
+
def spinlock_autotest(child, test_name):
- i = 0
- ir = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 5)
- # ok
- if index == 0:
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
- elif index == 3:
- if int(child.match.groups()[0]) < ir:
- return -1, "Fail [Bad order]"
- ir = int(child.match.groups()[0])
-
- # fail
- elif index == 4:
- return -1, "Fail [Timeout]"
- elif index == 1:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ ir = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Hello from within recursive locks "
+ "from ([0-9]*) !",
+ pexpect.TIMEOUT], timeout=5)
+ # ok
+ if index == 0:
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+ elif index == 3:
+ if int(child.match.groups()[0]) < ir:
+ return -1, "Fail [Bad order]"
+ ir = int(child.match.groups()[0])
+
+ # fail
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+ elif index == 1:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def rwlock_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Global write lock taken on master core ([0-9]*)",
- pexpect.TIMEOUT], timeout = 10)
- # ok
- if index == 0:
- if i != 0xffff:
- return -1, "Fail [Message is missing]"
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
-
- # must be the last message, check ordering
- elif index == 3:
- i = 0xffff
-
- elif index == 4:
- return -1, "Fail [Timeout]"
-
- # fail
- else:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Global write lock taken on master "
+ "core ([0-9]*)",
+ pexpect.TIMEOUT], timeout=10)
+ # ok
+ if index == 0:
+ if i != 0xffff:
+ return -1, "Fail [Message is missing]"
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+
+ # must be the last message, check ordering
+ elif index == 3:
+ i = 0xffff
+
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+
+ # fail
+ else:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def logs_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- log_list = [
- "TESTAPP1: error message",
- "TESTAPP1: critical message",
- "TESTAPP2: critical message",
- "TESTAPP1: error message",
- ]
-
- for log_msg in log_list:
- index = child.expect([log_msg,
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 3:
- return -1, "Fail [Timeout]"
- # not ok
- elif index != 0:
- return -1, "Fail"
-
- index = child.expect(["Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ log_list = [
+ "TESTAPP1: error message",
+ "TESTAPP1: critical message",
+ "TESTAPP2: critical message",
+ "TESTAPP1: error message",
+ ]
+
+ for log_msg in log_list:
+ index = child.expect([log_msg,
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 3:
+ return -1, "Fail [Timeout]"
+ # not ok
+ elif index != 0:
+ return -1, "Fail"
+
+ index = child.expect(["Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ return 0, "Success"
+
def timer_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- index = child.expect(["Start timer stress tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer stress tests 2",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer basic tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- prev_lcore_timer1 = -1
-
- lcore_tim0 = -1
- lcore_tim1 = -1
- lcore_tim2 = -1
- lcore_tim3 = -1
-
- while True:
- index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) count=([0-9]*) on core ([0-9]*)",
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 1:
- break
-
- if index == 2:
- return -1, "Fail"
- elif index == 3:
- return -1, "Fail [Timeout]"
-
- try:
- t = int(child.match.groups()[0])
- id = int(child.match.groups()[1])
- cnt = int(child.match.groups()[2])
- lcore = int(child.match.groups()[3])
- except:
- return -1, "Fail [Cannot parse]"
-
- # timer0 always expires on the same core when cnt < 20
- if id == 0:
- if lcore_tim0 == -1:
- lcore_tim0 = lcore
- elif lcore != lcore_tim0 and cnt < 20:
- return -1, "Fail [lcore != lcore_tim0 (%d, %d)]"%(lcore, lcore_tim0)
- if cnt > 21:
- return -1, "Fail [tim0 cnt > 21]"
-
- # timer1 each time expires on a different core
- if id == 1:
- if lcore == lcore_tim1:
- return -1, "Fail [lcore == lcore_tim1 (%d, %d)]"%(lcore, lcore_tim1)
- lcore_tim1 = lcore
- if cnt > 10:
- return -1, "Fail [tim1 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 2:
- if lcore_tim2 == -1:
- lcore_tim2 = lcore
- elif lcore != lcore_tim2:
- return -1, "Fail [lcore != lcore_tim2 (%d, %d)]"%(lcore, lcore_tim2)
- if cnt > 30:
- return -1, "Fail [tim2 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 3:
- if lcore_tim3 == -1:
- lcore_tim3 = lcore
- elif lcore != lcore_tim3:
- return -1, "Fail [lcore_tim3 changed (%d -> %d)]"%(lcore, lcore_tim3)
- if cnt > 30:
- return -1, "Fail [tim3 cnt > 30]"
-
- # must be 2 different cores
- if lcore_tim0 == lcore_tim3:
- return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]"%(lcore_tim0, lcore_tim3)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ index = child.expect(["Start timer stress tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer stress tests 2",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer basic tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ lcore_tim0 = -1
+ lcore_tim1 = -1
+ lcore_tim2 = -1
+ lcore_tim3 = -1
+
+ while True:
+ index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) "
+ "count=([0-9]*) on core ([0-9]*)",
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 1:
+ break
+
+ if index == 2:
+ return -1, "Fail"
+ elif index == 3:
+ return -1, "Fail [Timeout]"
+
+ try:
+ id = int(child.match.groups()[1])
+ cnt = int(child.match.groups()[2])
+ lcore = int(child.match.groups()[3])
+ except:
+ return -1, "Fail [Cannot parse]"
+
+ # timer0 always expires on the same core when cnt < 20
+ if id == 0:
+ if lcore_tim0 == -1:
+ lcore_tim0 = lcore
+ elif lcore != lcore_tim0 and cnt < 20:
+ return -1, "Fail [lcore != lcore_tim0 (%d, %d)]" \
+ % (lcore, lcore_tim0)
+ if cnt > 21:
+ return -1, "Fail [tim0 cnt > 21]"
+
+ # timer1 each time expires on a different core
+ if id == 1:
+ if lcore == lcore_tim1:
+ return -1, "Fail [lcore == lcore_tim1 (%d, %d)]" \
+ % (lcore, lcore_tim1)
+ lcore_tim1 = lcore
+ if cnt > 10:
+ return -1, "Fail [tim1 cnt > 30]"
+
+ # timer2 always expires on the same core
+ if id == 2:
+ if lcore_tim2 == -1:
+ lcore_tim2 = lcore
+ elif lcore != lcore_tim2:
+ return -1, "Fail [lcore != lcore_tim2 (%d, %d)]" \
+ % (lcore, lcore_tim2)
+ if cnt > 30:
+ return -1, "Fail [tim2 cnt > 30]"
+
+ # timer3 always expires on the same core
+ if id == 3:
+ if lcore_tim3 == -1:
+ lcore_tim3 = lcore
+ elif lcore != lcore_tim3:
+ return -1, "Fail [lcore_tim3 changed (%d -> %d)]" \
+ % (lcore_tim3, lcore)
+ if cnt > 30:
+ return -1, "Fail [tim3 cnt > 30]"
+
+ # must be 2 different cores
+ if lcore_tim0 == lcore_tim3:
+ return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]" \
+ % (lcore_tim0, lcore_tim3)
+
+ return 0, "Success"
+
def ring_autotest(child, test_name):
- child.sendline(test_name)
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 2)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- child.sendline("set_watermark test 100")
- child.sendline("dump_ring test")
- index = child.expect([" watermark=100",
- pexpect.TIMEOUT], timeout = 1)
- if index != 0:
- return -1, "Fail [Bad watermark]"
-
- return 0, "Success"
+ child.sendline(test_name)
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=2)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ child.sendline("set_watermark test 100")
+ child.sendline("dump_ring test")
+ index = child.expect([" watermark=100",
+ pexpect.TIMEOUT], timeout=1)
+ if index != 0:
+ return -1, "Fail [Bad watermark]"
+
+ return 0, "Success"
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 29e8efb..34c62de 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -58,7 +58,8 @@
html_show_copyright = False
highlight_language = 'none'
-version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion']).decode('utf-8').rstrip()
+version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion'])
+version = version.decode('utf-8').rstrip()
release = version
master_doc = 'index'
@@ -94,6 +95,7 @@
'preamble': latex_preamble
}
+
# Override the default Latex formatter in order to modify the
# code/verbatim blocks.
class CustomLatexFormatter(LatexFormatter):
@@ -117,12 +119,12 @@ def __init__(self, **options):
("tools/devbind", "dpdk-devbind",
"check device status and bind/unbind them from drivers", "", 8)]
-######## :numref: fallback ########
+
+# ####### :numref: fallback ########
# The following hook functions add some simple handling for the :numref:
# directive for Sphinx versions prior to 1.3.1. The functions replace the
# :numref: reference with a link to the target (for all Sphinx doc types).
# It doesn't try to label figures/tables.
-
def numref_role(reftype, rawtext, text, lineno, inliner):
"""
Add a Sphinx role to handle numref references. Note, we can't convert
@@ -136,6 +138,7 @@ def numref_role(reftype, rawtext, text, lineno, inliner):
internal=True)
return [newnode], []
+
def process_numref(app, doctree, from_docname):
"""
Process the numref nodes once the doctree has been built and prior to
diff --git a/examples/ip_pipeline/config/diagram-generator.py b/examples/ip_pipeline/config/diagram-generator.py
index 6b7170b..1748833 100755
--- a/examples/ip_pipeline/config/diagram-generator.py
+++ b/examples/ip_pipeline/config/diagram-generator.py
@@ -36,7 +36,8 @@
# the DPDK ip_pipeline application.
#
# The input configuration file is translated to an output file in DOT syntax,
-# which is then used to create the image file using graphviz (www.graphviz.org).
+# which is then used to create the image file using graphviz
+# (www.graphviz.org).
#
from __future__ import print_function
@@ -94,6 +95,7 @@
# SOURCEx | SOURCEx | SOURCEx | PIPELINEy | SOURCEx
# SINKx | SINKx | PIPELINEy | SINKx | SINKx
+
#
# Parse the input configuration file to detect the graph nodes and edges
#
@@ -321,16 +323,17 @@ def process_config_file(cfgfile):
#
print('Creating image file "%s" ...' % imgfile)
if os.system('which dot > /dev/null'):
- print('Error: Unable to locate "dot" executable.' \
- 'Please install the "graphviz" package (www.graphviz.org).')
+ print('Error: Unable to locate "dot" executable. '
+ 'Please install the "graphviz" package (www.graphviz.org).')
return
os.system(dot_cmd)
if __name__ == '__main__':
- parser = argparse.ArgumentParser(description=\
- 'Create diagram for IP pipeline configuration file.')
+ parser = argparse.ArgumentParser(description='Create diagram for IP '
+ 'pipeline configuration '
+ 'file.')
parser.add_argument(
'-f',
diff --git a/examples/ip_pipeline/config/pipeline-to-core-mapping.py b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
index c2050b8..7a4eaa2 100755
--- a/examples/ip_pipeline/config/pipeline-to-core-mapping.py
+++ b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
@@ -39,15 +39,14 @@
#
from __future__ import print_function
-import sys
-import errno
-import os
-import re
+from collections import namedtuple
+import argparse
import array
+import errno
import itertools
+import os
import re
-import argparse
-from collections import namedtuple
+import sys
# default values
enable_stage0_traceout = 1
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index d38d0b5..ccc22ec 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -38,40 +38,40 @@
cores = []
core_map = {}
-fd=open("/proc/cpuinfo")
+fd = open("/proc/cpuinfo")
lines = fd.readlines()
fd.close()
core_details = []
core_lines = {}
for line in lines:
- if len(line.strip()) != 0:
- name, value = line.split(":", 1)
- core_lines[name.strip()] = value.strip()
- else:
- core_details.append(core_lines)
- core_lines = {}
+ if len(line.strip()) != 0:
+ name, value = line.split(":", 1)
+ core_lines[name.strip()] = value.strip()
+ else:
+ core_details.append(core_lines)
+ core_lines = {}
for core in core_details:
- for field in ["processor", "core id", "physical id"]:
- if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
- sys.exit(1)
- core[field] = int(core[field])
+ for field in ["processor", "core id", "physical id"]:
+ if field not in core:
+ print "Error getting '%s' value from /proc/cpuinfo" % field
+ sys.exit(1)
+ core[field] = int(core[field])
- if core["core id"] not in cores:
- cores.append(core["core id"])
- if core["physical id"] not in sockets:
- sockets.append(core["physical id"])
- key = (core["physical id"], core["core id"])
- if key not in core_map:
- core_map[key] = []
- core_map[key].append(core["processor"])
+ if core["core id"] not in cores:
+ cores.append(core["core id"])
+ if core["physical id"] not in sockets:
+ sockets.append(core["physical id"])
+ key = (core["physical id"], core["core id"])
+ if key not in core_map:
+ core_map[key] = []
+ core_map[key].append(core["processor"])
print "============================================================"
print "Core and Socket Information (as reported by '/proc/cpuinfo')"
print "============================================================\n"
-print "cores = ",cores
+print "cores = ", cores
print "sockets = ", sockets
print ""
@@ -81,15 +81,16 @@
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
+ print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
print ""
+
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "--------".ljust(max_core_map_len),
+ print "--------".ljust(max_core_map_len),
print ""
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
- for s in sockets:
- print str(core_map[(s,c)]).ljust(max_core_map_len),
- print ""
+ print "Core %s" % str(c).ljust(max_core_id_len),
+ for s in sockets:
+ print str(core_map[(s, c)]).ljust(max_core_map_len),
+ print ""
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index f1d374d..4f51a4b 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -93,10 +93,10 @@ def usage():
Unbind a device (Equivalent to \"-b none\")
--force:
- By default, network devices which are used by Linux - as indicated by having
- routes in the routing table - cannot be modified. Using the --force
- flag overrides this behavior, allowing active links to be forcibly
- unbound.
+ By default, network devices which are used by Linux - as indicated by
+ having routes in the routing table - cannot be modified. Using the
+ --force flag overrides this behavior, allowing active links to be
+ forcibly unbound.
WARNING: This can lead to loss of network connection and should be used
with caution.
@@ -151,7 +151,7 @@ def find_module(mod):
# check for a copy based off current path
tools_dir = dirname(abspath(sys.argv[0]))
- if (tools_dir.endswith("tools")):
+ if tools_dir.endswith("tools"):
base_dir = dirname(tools_dir)
find_out = check_output(["find", base_dir, "-name", mod + ".ko"])
if len(find_out) > 0: # something matched
@@ -249,7 +249,7 @@ def get_nic_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
+ if len(dev_line) == 0:
if dev["Class"][0:2] == NETWORK_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
@@ -315,8 +315,8 @@ def get_crypto_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
- if (dev["Class"][0:2] == CRYPTO_BASE_CLASS):
+ if len(dev_line) == 0:
+ if dev["Class"][0:2] == CRYPTO_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
dev["Device"] = int(dev["Device"], 16)
@@ -513,7 +513,8 @@ def display_devices(title, dev_list, extra_params=None):
for dev in dev_list:
if extra_params is not None:
strings.append("%s '%s' %s" % (dev["Slot"],
- dev["Device_str"], extra_params % dev))
+ dev["Device_str"],
+ extra_params % dev))
else:
strings.append("%s '%s'" % (dev["Slot"], dev["Device_str"]))
# sort before printing, so that the entries appear in PCI order
@@ -532,7 +533,7 @@ def show_status():
# split our list of network devices into the three categories above
for d in devices.keys():
- if (NETWORK_BASE_CLASS in devices[d]["Class"]):
+ if NETWORK_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
@@ -555,7 +556,7 @@ def show_status():
no_drv = []
for d in devices.keys():
- if (CRYPTO_BASE_CLASS in devices[d]["Class"]):
+ if CRYPTO_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3db9819..3d3ad7d 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -4,52 +4,20 @@
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+import json
import os
+import platform
+import string
import sys
+from elftools.common.exceptions import ELFError
+from elftools.common.py3compat import (byte2int, bytes2str, str2bytes)
+from elftools.elf.elffile import ELFFile
from optparse import OptionParser
-import string
-import json
-import platform
# For running from development directory. It should take precedence over the
# installed pyelftools.
sys.path.insert(0, '.')
-
-from elftools import __version__
-from elftools.common.exceptions import ELFError
-from elftools.common.py3compat import (
- ifilter, byte2int, bytes2str, itervalues, str2bytes)
-from elftools.elf.elffile import ELFFile
-from elftools.elf.dynamic import DynamicSection, DynamicSegment
-from elftools.elf.enums import ENUM_D_TAG
-from elftools.elf.segments import InterpSegment
-from elftools.elf.sections import SymbolTableSection
-from elftools.elf.gnuversions import (
- GNUVerSymSection, GNUVerDefSection,
- GNUVerNeedSection,
-)
-from elftools.elf.relocation import RelocationSection
-from elftools.elf.descriptions import (
- describe_ei_class, describe_ei_data, describe_ei_version,
- describe_ei_osabi, describe_e_type, describe_e_machine,
- describe_e_version_numeric, describe_p_type, describe_p_flags,
- describe_sh_type, describe_sh_flags,
- describe_symbol_type, describe_symbol_bind, describe_symbol_visibility,
- describe_symbol_shndx, describe_reloc_type, describe_dyn_tag,
- describe_ver_flags,
-)
-from elftools.elf.constants import E_FLAGS
-from elftools.dwarf.dwarfinfo import DWARFInfo
-from elftools.dwarf.descriptions import (
- describe_reg_name, describe_attr_value, set_global_machine_arch,
- describe_CFI_instructions, describe_CFI_register_rule,
- describe_CFI_CFA_rule,
-)
-from elftools.dwarf.constants import (
- DW_LNS_copy, DW_LNS_set_file, DW_LNE_define_file)
-from elftools.dwarf.callframe import CIE, FDE
-
raw_output = False
pcidb = None
@@ -326,7 +294,7 @@ def parse_pmd_info_string(self, mystring):
for i in optional_pmd_info:
try:
print("%s: %s" % (i['tag'], pmdinfo[i['id']]))
- except KeyError as e:
+ except KeyError:
continue
if (len(pmdinfo["pci_ids"]) != 0):
@@ -475,7 +443,7 @@ def process_dt_needed_entries(self):
with open(library, 'rb') as file:
try:
libelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
print("%s is no an ELF file" % library)
continue
libelf.process_dt_needed_entries()
@@ -491,7 +459,7 @@ def scan_autoload_path(autoload_path):
try:
dirs = os.listdir(autoload_path)
- except OSError as e:
+ except OSError:
# Couldn't read the directory, give up
return
@@ -503,10 +471,10 @@ def scan_autoload_path(autoload_path):
try:
file = open(dpath, 'rb')
readelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
# this is likely not an elf file, skip it
continue
- except IOError as e:
+ except IOError:
# No permission to read the file, skip it
continue
@@ -531,7 +499,7 @@ def scan_for_autoload_pmds(dpdk_path):
file = open(dpdk_path, 'rb')
try:
readelf = ReadElf(file, sys.stdout)
- except ElfError as e:
+ except ELFError:
if raw_output is False:
print("Unable to parse %s" % file)
return
@@ -557,7 +525,7 @@ def main(stream=None):
global raw_output
global pcidb
- pcifile_default = "./pci.ids" # for unknown OS's assume local file
+ pcifile_default = "./pci.ids" # For unknown OS's assume local file
if platform.system() == 'Linux':
pcifile_default = "/usr/share/hwdata/pci.ids"
elif platform.system() == 'FreeBSD':
@@ -577,7 +545,8 @@ def main(stream=None):
"to get vendor names from",
default=pcifile_default, metavar="FILE")
optparser.add_option("-t", "--table", dest="tblout",
- help="output information on hw support as a hex table",
+ help="output information on hw support as a "
+ "hex table",
action='store_true')
optparser.add_option("-p", "--plugindir", dest="pdir",
help="scan dpdk for autoload plugins",
--
2.7.4
* [dpdk-dev] [PATCH v1 2/4] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 1/4] app: make python apps pep8 compliant John McNamara
@ 2016-12-08 15:51 ` John McNamara
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 3/4] app: give python apps a consistent shebang line John McNamara
` (18 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-08 15:51 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Make all the DPDK Python apps work with either Python 2 or 3,
so that they run correctly under whichever version is the
system default.
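The bulk of the changes below follow a single idiom: import
print_function so that the same print() call works under both
interpreters. A minimal, self-contained sketch of that idiom
(illustrative only; the test name is a hypothetical example):

    from __future__ import print_function  # print() on Python 2 too

    test_name = "cycles_autotest"  # hypothetical example value
    # Build the padded label and print it in one call; this replaces
    # the Python 2 trailing-comma idiom ("print x,"), which is a
    # syntax error once print becomes a function.
    print(("%s:" % test_name).ljust(30), "PASS")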
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 26 ++++++++++++------------
app/cmdline_test/cmdline_test_data.py | 2 +-
app/test/autotest.py | 10 ++++-----
app/test/autotest_runner.py | 37 +++++++++++++++++-----------------
tools/cpu_layout.py | 38 ++++++++++++++++++-----------------
tools/dpdk-pmdinfo.py | 12 ++++++-----
6 files changed, 64 insertions(+), 61 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 4729987..229f71f 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,7 +32,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that runs cmdline_test app and feeds keystrokes into it.
-
+from __future__ import print_function
import cmdline_test_data
import os
import pexpect
@@ -81,38 +81,38 @@ def runHistoryTest(child):
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
child = pexpect.spawn(test_app_path)
-print "Running command-line tests..."
+print("Running command-line tests...")
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
+ testname = (test["Name"] + ":").ljust(30)
try:
runTest(child, test)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
-print ("History fill test:").ljust(30),
+testname = ("History fill test:").ljust(30)
try:
runHistoryTest(child)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index 3ce6cbc..9cc966b 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
diff --git a/app/test/autotest.py b/app/test/autotest.py
index 3a00538..5c19a02 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,15 +32,15 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that uses either test app or qemu controlled by python-pexpect
-
+from __future__ import print_function
import autotest_data
import autotest_runner
import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print("Usage: autotest.py [test app|test iso image] ",
+ "[target] [whitelist|-blacklist]")
if len(sys.argv) < 3:
usage()
@@ -63,7 +63,7 @@ def usage():
cmdline = "%s -c f -n 4" % (sys.argv[1])
-print cmdline
+print(cmdline)
runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
test_whitelist)
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 55b63a8..7aeb0bd 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -271,15 +271,16 @@ def __process_results(self, results):
total_time = int(cur_time - self.start)
# print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+ result = ("%s:" % test_name).ljust(30)
+ result += result_str.ljust(29)
+ result += "[%02dm %02ds]" % (test_time / 60, test_time % 60)
# don't print out total time every line, it's the same anyway
if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ print(result,
+ "[%02dm %02ds]" % (total_time / 60, total_time % 60))
else:
- print ""
+ print(result)
# if test failed and it wasn't a "start" test
if test_result < 0 and not i == 0:
@@ -294,7 +295,7 @@ def __process_results(self, results):
f = open("%s_%s_report.rst" %
(self.target, test_name), "w")
except IOError:
- print "Report for %s could not be created!" % test_name
+ print("Report for %s could not be created!" % test_name)
else:
with f:
f.write(report)
@@ -360,12 +361,10 @@ def run_all_tests(self):
try:
# create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
+ print("")
+ print("Test name".ljust(30), "Test result".ljust(29),
+ "Test".center(9), "Total".center(9))
+ print("=" * 80)
# make a note of tests start time
self.start = time.time()
@@ -407,11 +406,11 @@ def run_all_tests(self):
total_time = int(cur_time - self.start)
# print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60,
- total_time % 60)
+ print("=" * 80)
+ print("Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60))
if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
+ print("Number of failed tests: %s" % str(self.fails))
# write summary to logfile
self.logfile.write("Summary\n")
@@ -420,8 +419,8 @@ def run_all_tests(self):
self.logfile.write("Failed tests: ".ljust(
15) + "%i\n" % self.fails)
except:
- print "Exception occurred"
- print sys.exc_info()
+ print("Exception occurred")
+ print(sys.exc_info())
self.fails = 1
# drop logs from all executions to a logfile
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index ccc22ec..0e049a6 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
@@ -31,7 +32,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
-
+from __future__ import print_function
import sys
sockets = []
@@ -55,7 +56,7 @@
for core in core_details:
for field in ["processor", "core id", "physical id"]:
if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
+ print("Error getting '%s' value from /proc/cpuinfo" % field)
sys.exit(1)
core[field] = int(core[field])
@@ -68,29 +69,30 @@
core_map[key] = []
core_map[key].append(core["processor"])
-print "============================================================"
-print "Core and Socket Information (as reported by '/proc/cpuinfo')"
-print "============================================================\n"
-print "cores = ", cores
-print "sockets = ", sockets
-print ""
+print("============================================================")
+print("Core and Socket Information (as reported by '/proc/cpuinfo')")
+print("============================================================\n")
+print("cores = ", cores)
+print("sockets = ", sockets)
+print("")
max_processor_len = len(str(len(cores) * len(sockets) * 2 - 1))
max_core_map_len = max_processor_len * 2 + len('[, ]') + len('Socket ')
max_core_id_len = len(str(max(cores)))
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
-print ""
+ output += " Socket %s" % str(s).ljust(max_core_map_len - len('Socket '))
+print(output)
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "--------".ljust(max_core_map_len),
-print ""
+ output += " --------".ljust(max_core_map_len)
+ output += " "
+print(output)
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
+ output = "Core %s" % str(c).ljust(max_core_id_len)
for s in sockets:
- print str(core_map[(s, c)]).ljust(max_core_map_len),
- print ""
+ output += " " + str(core_map[(s, c)]).ljust(max_core_map_len)
+ print(output)
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3d3ad7d..097982e 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -1,9 +1,11 @@
#!/usr/bin/env python
+
# -------------------------------------------------------------------------
#
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+from __future__ import print_function
import json
import os
import platform
@@ -54,7 +56,7 @@ def addDevice(self, deviceStr):
self.devices[devID] = Device(deviceStr)
def report(self):
- print self.ID, self.name
+ print(self.ID, self.name)
for id, dev in self.devices.items():
dev.report()
@@ -80,7 +82,7 @@ def __init__(self, deviceStr):
self.subdevices = {}
def report(self):
- print "\t%s\t%s" % (self.ID, self.name)
+ print("\t%s\t%s" % (self.ID, self.name))
for subID, subdev in self.subdevices.items():
subdev.report()
@@ -126,7 +128,7 @@ def __init__(self, vendor, device, name):
self.name = name
def report(self):
- print "\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name)
+ print("\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name))
class PCIIds:
@@ -154,7 +156,7 @@ def reportVendors(self):
"""Reports the vendors
"""
for vid, v in self.vendors.items():
- print v.ID, v.name
+ print(v.ID, v.name)
def report(self, vendor=None):
"""
@@ -185,7 +187,7 @@ def findDate(self, content):
def parse(self):
if len(self.contents) < 1:
- print "data/%s-pci.ids not found" % self.date
+ print("data/%s-pci.ids not found" % self.date)
else:
vendorID = ""
deviceID = ""
--
2.7.4
* [dpdk-dev] [PATCH v1 3/4] app: give python apps a consistent shebang line
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 1/4] app: make python apps pep8 compliant John McNamara
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 2/4] app: make python apps python2/3 compliant John McNamara
@ 2016-12-08 15:51 ` John McNamara
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 4/4] doc: add required python versions to coding guidelines John McNamara
` (17 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-08 15:51 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Add a consistent "env python" shebang line to the DPDK Python
apps so that they invoke the default system Python.
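For reference, a minimal sketch of the resulting file header; the
body below is illustrative and not taken from any patched file:

    #!/usr/bin/env python

    # "env" resolves whichever python is first on the user's PATH,
    # instead of hard-coding /usr/bin/python, which may be missing
    # or may not be the default interpreter on a given system.
    import sys
    print(sys.version)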
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/test/autotest_data.py | 2 +-
app/test/autotest_test_funcs.py | 2 +-
doc/guides/conf.py | 2 ++
tools/dpdk-devbind.py | 3 ++-
4 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5176064..7be345a 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index c482ea8..1fa8cf0 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 34c62de..97c5d0e 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
# BSD LICENSE
# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
# All rights reserved.
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index 4f51a4b..a5b2af5 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
--
2.7.4
* [dpdk-dev] [PATCH v1 4/4] doc: add required python versions to coding guidelines
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (2 preceding siblings ...)
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 3/4] app: give python apps a consistent shebang line John McNamara
@ 2016-12-08 15:51 ` John McNamara
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 1/4] app: make python apps pep8 compliant John McNamara
` (16 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-08 15:51 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Add a requirement to support both Python 2 and 3 to the
DPDK Python Coding Standards.
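As a hedged illustration of what supporting both versions means in
practice (a hypothetical example, not part of the patch), the two
differences that matter most in these scripts are print and integer
division:

    from __future__ import print_function

    # On Python 3, "/" is true division and returns a float; "//" is
    # floor division on both versions, so duration formatting such as
    # the autotest summary should use "//".
    total_time = 125
    print("Total run time: %02dm %02ds" % (total_time // 60,
                                           total_time % 60))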
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/test/autotest_data.py | 188 +++++++++++++++----------------
doc/guides/contributing/coding_style.rst | 3 +-
2 files changed, 96 insertions(+), 95 deletions(-)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 7be345a..0cf4cfd 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -59,46 +59,46 @@ def per_sockets(num):
"Tests":
[
{
- "Name": "Cycles autotest",
+ "Name": "Cycles autotest",
"Command": "cycles_autotest",
- "Func": default_autotest,
- "Report": None,
+ "Func": default_autotest,
+ "Report": None,
},
{
"Name": "Timer autotest",
- "Command": "timer_autotest",
+ "Command": "timer_autotest",
"Func": timer_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Debug autotest",
- "Command": "debug_autotest",
+ "Command": "debug_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Errno autotest",
- "Command": "errno_autotest",
+ "Command": "errno_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Meter autotest",
- "Command": "meter_autotest",
+ "Command": "meter_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Common autotest",
- "Command": "common_autotest",
+ "Command": "common_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Resource autotest",
- "Command": "resource_autotest",
+ "Command": "resource_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -109,51 +109,51 @@ def per_sockets(num):
[
{
"Name": "Memory autotest",
- "Command": "memory_autotest",
+ "Command": "memory_autotest",
"Func": memory_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Read/write lock autotest",
- "Command": "rwlock_autotest",
+ "Command": "rwlock_autotest",
"Func": rwlock_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Logs autotest",
- "Command": "logs_autotest",
+ "Command": "logs_autotest",
"Func": logs_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "CPU flags autotest",
- "Command": "cpuflags_autotest",
+ "Command": "cpuflags_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Version autotest",
- "Command": "version_autotest",
+ "Command": "version_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "EAL filesystem autotest",
- "Command": "eal_fs_autotest",
+ "Command": "eal_fs_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "EAL flags autotest",
- "Command": "eal_flags_autotest",
+ "Command": "eal_flags_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Hash autotest",
- "Command": "hash_autotest",
+ "Command": "hash_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
],
},
@@ -164,9 +164,9 @@ def per_sockets(num):
[
{
"Name": "LPM autotest",
- "Command": "lpm_autotest",
+ "Command": "lpm_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "LPM6 autotest",
@@ -176,27 +176,27 @@ def per_sockets(num):
},
{
"Name": "Memcpy autotest",
- "Command": "memcpy_autotest",
+ "Command": "memcpy_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Memzone autotest",
- "Command": "memzone_autotest",
+ "Command": "memzone_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "String autotest",
- "Command": "string_autotest",
+ "Command": "string_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Alarm autotest",
- "Command": "alarm_autotest",
+ "Command": "alarm_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -207,39 +207,39 @@ def per_sockets(num):
[
{
"Name": "PCI autotest",
- "Command": "pci_autotest",
+ "Command": "pci_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Malloc autotest",
- "Command": "malloc_autotest",
+ "Command": "malloc_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Multi-process autotest",
- "Command": "multiprocess_autotest",
+ "Command": "multiprocess_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Mbuf autotest",
- "Command": "mbuf_autotest",
+ "Command": "mbuf_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Per-lcore autotest",
- "Command": "per_lcore_autotest",
+ "Command": "per_lcore_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Ring autotest",
- "Command": "ring_autotest",
+ "Command": "ring_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -250,33 +250,33 @@ def per_sockets(num):
[
{
"Name": "Spinlock autotest",
- "Command": "spinlock_autotest",
+ "Command": "spinlock_autotest",
"Func": spinlock_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Byte order autotest",
- "Command": "byteorder_autotest",
+ "Command": "byteorder_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "TAILQ autotest",
- "Command": "tailq_autotest",
+ "Command": "tailq_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Command-line autotest",
- "Command": "cmdline_autotest",
+ "Command": "cmdline_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Interrupts autotest",
- "Command": "interrupt_autotest",
+ "Command": "interrupt_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -287,33 +287,33 @@ def per_sockets(num):
[
{
"Name": "Function reentrancy autotest",
- "Command": "func_reentrancy_autotest",
+ "Command": "func_reentrancy_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Mempool autotest",
- "Command": "mempool_autotest",
+ "Command": "mempool_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Atomics autotest",
- "Command": "atomic_autotest",
+ "Command": "atomic_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Prefetch autotest",
- "Command": "prefetch_autotest",
+ "Command": "prefetch_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
- "Name": "Red autotest",
+ "Name": "Red autotest",
"Command": "red_autotest",
- "Func": default_autotest,
- "Report": None,
+ "Func": default_autotest,
+ "Report": None,
},
]
},
@@ -324,21 +324,21 @@ def per_sockets(num):
[
{
"Name": "PMD ring autotest",
- "Command": "ring_pmd_autotest",
+ "Command": "ring_pmd_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
"Name": "Access list control autotest",
- "Command": "acl_autotest",
+ "Command": "acl_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
{
- "Name": "Sched autotest",
+ "Name": "Sched autotest",
"Command": "sched_autotest",
- "Func": default_autotest,
- "Report": None,
+ "Func": default_autotest,
+ "Report": None,
},
]
},
@@ -354,9 +354,9 @@ def per_sockets(num):
[
{
"Name": "KNI autotest",
- "Command": "kni_autotest",
+ "Command": "kni_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -367,9 +367,9 @@ def per_sockets(num):
[
{
"Name": "Mempool performance autotest",
- "Command": "mempool_perf_autotest",
+ "Command": "mempool_perf_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -380,9 +380,9 @@ def per_sockets(num):
[
{
"Name": "Memcpy performance autotest",
- "Command": "memcpy_perf_autotest",
+ "Command": "memcpy_perf_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -393,9 +393,9 @@ def per_sockets(num):
[
{
"Name": "Hash performance autotest",
- "Command": "hash_perf_autotest",
+ "Command": "hash_perf_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -408,7 +408,7 @@ def per_sockets(num):
"Name": "Power autotest",
"Command": "power_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -445,9 +445,9 @@ def per_sockets(num):
[
{
"Name": "Timer performance autotest",
- "Command": "timer_perf_autotest",
+ "Command": "timer_perf_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
@@ -462,9 +462,9 @@ def per_sockets(num):
[
{
"Name": "Ring performance autotest",
- "Command": "ring_perf_autotest",
+ "Command": "ring_perf_autotest",
"Func": default_autotest,
- "Report": None,
+ "Report": None,
},
]
},
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 1eb67f3..4163960 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -690,6 +690,7 @@ Control Statements
Python Code
-----------
-All python code should be compliant with `PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
+All Python code should work with Python 2.7+ and 3.2+ and be compliant with
+`PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
The ``pep8`` tool can be used for testing compliance with the guidelines.
--
2.7.4
* [dpdk-dev] [PATCH v2 1/4] app: make python apps pep8 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (3 preceding siblings ...)
2016-12-08 15:51 ` [dpdk-dev] [PATCH v1 4/4] doc: add required python versions to coding guidelines John McNamara
@ 2016-12-08 16:03 ` John McNamara
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 2/4] app: make python apps python2/3 compliant John McNamara
` (15 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-08 16:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Make all the DPDK Python applications compliant with the PEP8
standard to allow consistency checking of patches and to enable
further refactoring.
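The recurring fixes below are four-space indentation in place of
tabs, no spaces around a keyword-argument '=', and identity
comparison against None. A small sketch of the target style
(hypothetical code, not taken from the patch):

    import re

    def matches(pattern, text=None):
        # PEP8: four-space indents, "text=None" without spaces around
        # the '=', and "is None" rather than "== None".
        if text is None:
            return False
        return re.search(pattern, text) is not None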
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
V2: Fixed file/patch order from sloppy rebase.
app/cmdline_test/cmdline_test.py | 81 +-
app/cmdline_test/cmdline_test_data.py | 401 +++++-----
app/test/autotest.py | 40 +-
app/test/autotest_data.py | 831 +++++++++++----------
app/test/autotest_runner.py | 739 +++++++++---------
app/test/autotest_test_funcs.py | 479 ++++++------
doc/guides/conf.py | 9 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 55 +-
tools/dpdk-devbind.py | 23 +-
tools/dpdk-pmdinfo.py | 61 +-
12 files changed, 1376 insertions(+), 1367 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 8efc5ea..4729987 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -33,16 +33,21 @@
# Script that runs cmdline_test app and feeds keystrokes into it.
-import sys, pexpect, string, os, cmdline_test_data
+import cmdline_test_data
+import os
+import pexpect
+import sys
+
#
# function to run test
#
-def runTest(child,test):
- child.send(test["Sequence"])
- if test["Result"] == None:
- return 0
- child.expect(test["Result"],1)
+def runTest(child, test):
+ child.send(test["Sequence"])
+ if test["Result"] is None:
+ return 0
+ child.expect(test["Result"], 1)
+
#
# history test is a special case
@@ -57,57 +62,57 @@ def runTest(child,test):
# This is a self-contained test, it needs only a pexpect child
#
def runHistoryTest(child):
- # find out history size
- child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
- child.expect("History buffer size: \\d+", timeout=1)
- history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
- i = 0
+ # find out history size
+ child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
+ child.expect("History buffer size: \\d+", timeout=1)
+ history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
+ i = 0
- # fill the history with numbers
- while i < history_size / 10:
- # add 1 to prevent from parsing as octals
- child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
- # the app will simply print out the number
- child.expect(str(i + 100000000), timeout=1)
- i += 1
- # scroll back history
- child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
- child.expect("100000000", timeout=1)
+ # fill the history with numbers
+ while i < history_size / 10:
+ # add 1 to prevent from parsing as octals
+ child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
+ # the app will simply print out the number
+ child.expect(str(i + 100000000), timeout=1)
+ i += 1
+ # scroll back history
+ child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
+ child.expect("100000000", timeout=1)
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
child = pexpect.spawn(test_app_path)
print "Running command-line tests..."
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
- try:
- runTest(child,test)
- print "PASS"
- except:
- print "FAIL"
- print child
- sys.exit(1)
+ print (test["Name"] + ":").ljust(30),
+ try:
+ runTest(child, test)
+ print "PASS"
+ except:
+ print "FAIL"
+ print child
+ sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
print ("History fill test:").ljust(30),
try:
- runHistoryTest(child)
- print "PASS"
+ runHistoryTest(child)
+ print "PASS"
except:
- print "FAIL"
- print child
- sys.exit(1)
+ print "FAIL"
+ print child
+ sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index b1945a5..3ce6cbc 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -33,8 +33,6 @@
# collection of static data
-import sys
-
# keycode constants
CTRL_A = chr(1)
CTRL_B = chr(2)
@@ -95,217 +93,220 @@
# and expected output (if any).
tests = [
-# test basic commands
- {"Name" : "command test 1",
- "Sequence" : "ambiguous first" + ENTER,
- "Result" : CMD1},
- {"Name" : "command test 2",
- "Sequence" : "ambiguous second" + ENTER,
- "Result" : CMD2},
- {"Name" : "command test 3",
- "Sequence" : "ambiguous ambiguous" + ENTER,
- "Result" : AMBIG},
- {"Name" : "command test 4",
- "Sequence" : "ambiguous ambiguous2" + ENTER,
- "Result" : AMBIG},
+ # test basic commands
+ {"Name": "command test 1",
+ "Sequence": "ambiguous first" + ENTER,
+ "Result": CMD1},
+ {"Name": "command test 2",
+ "Sequence": "ambiguous second" + ENTER,
+ "Result": CMD2},
+ {"Name": "command test 3",
+ "Sequence": "ambiguous ambiguous" + ENTER,
+ "Result": AMBIG},
+ {"Name": "command test 4",
+ "Sequence": "ambiguous ambiguous2" + ENTER,
+ "Result": AMBIG},
- {"Name" : "invalid command test 1",
- "Sequence" : "ambiguous invalid" + ENTER,
- "Result" : BAD_ARG},
-# test invalid commands
- {"Name" : "invalid command test 2",
- "Sequence" : "invalid" + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "invalid command test 3",
- "Sequence" : "ambiguousinvalid" + ENTER2,
- "Result" : NOT_FOUND},
+ {"Name": "invalid command test 1",
+ "Sequence": "ambiguous invalid" + ENTER,
+ "Result": BAD_ARG},
+ # test invalid commands
+ {"Name": "invalid command test 2",
+ "Sequence": "invalid" + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "invalid command test 3",
+ "Sequence": "ambiguousinvalid" + ENTER2,
+ "Result": NOT_FOUND},
-# test arrows and deletes
- {"Name" : "arrows & delete test 1",
- "Sequence" : "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
- "Result" : SINGLE},
- {"Name" : "arrows & delete test 2",
- "Sequence" : "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
- "Result" : SINGLE},
+ # test arrows and deletes
+ {"Name": "arrows & delete test 1",
+ "Sequence": "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
+ "Result": SINGLE},
+ {"Name": "arrows & delete test 2",
+ "Sequence": "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
+ "Result": SINGLE},
-# test backspace
- {"Name" : "backspace test",
- "Sequence" : "singlebad" + BKSPACE*3 + ENTER,
- "Result" : SINGLE},
+ # test backspace
+ {"Name": "backspace test",
+ "Sequence": "singlebad" + BKSPACE*3 + ENTER,
+ "Result": SINGLE},
-# test goto left and goto right
- {"Name" : "goto left test",
- "Sequence" : "biguous first" + CTRL_A + "am" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right test",
- "Sequence" : "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
- "Result" : CMD1},
+ # test goto left and goto right
+ {"Name": "goto left test",
+ "Sequence": "biguous first" + CTRL_A + "am" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right test",
+ "Sequence": "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
+ "Result": CMD1},
-# test goto words
- {"Name" : "goto left word test",
- "Sequence" : "ambiguous st" + ALT_B + "fir" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right word test",
- "Sequence" : "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
- "Result" : CMD1},
+ # test goto words
+ {"Name": "goto left word test",
+ "Sequence": "ambiguous st" + ALT_B + "fir" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right word test",
+ "Sequence": "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
+ "Result": CMD1},
-# test removing words
- {"Name" : "remove left word 1",
- "Sequence" : "single invalid" + CTRL_W + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove left word 2",
- "Sequence" : "single invalid" + ALT_BKSPACE + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove right word",
- "Sequence" : "single invalid" + ALT_B + ALT_D + ENTER,
- "Result" : SINGLE},
+ # test removing words
+ {"Name": "remove left word 1",
+ "Sequence": "single invalid" + CTRL_W + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove left word 2",
+ "Sequence": "single invalid" + ALT_BKSPACE + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove right word",
+ "Sequence": "single invalid" + ALT_B + ALT_D + ENTER,
+ "Result": SINGLE},
-# test kill buffer (copy and paste)
- {"Name" : "killbuffer test 1",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A + CTRL_Y + ENTER,
- "Result" : CMD1},
- {"Name" : "killbuffer test 2",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
- "Result" : NOT_FOUND},
+ # test kill buffer (copy and paste)
+ {"Name": "killbuffer test 1",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A +
+ CTRL_Y + ENTER,
+ "Result": CMD1},
+ {"Name": "killbuffer test 2",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
+ "Result": NOT_FOUND},
-# test newline
- {"Name" : "newline test",
- "Sequence" : "invalid" + CTRL_C + "single" + ENTER,
- "Result" : SINGLE},
+ # test newline
+ {"Name": "newline test",
+ "Sequence": "invalid" + CTRL_C + "single" + ENTER,
+ "Result": SINGLE},
-# test redisplay (nothing should really happen)
- {"Name" : "redisplay test",
- "Sequence" : "single" + CTRL_L + ENTER,
- "Result" : SINGLE},
+ # test redisplay (nothing should really happen)
+ {"Name": "redisplay test",
+ "Sequence": "single" + CTRL_L + ENTER,
+ "Result": SINGLE},
-# test autocomplete
- {"Name" : "autocomplete test 1",
- "Sequence" : "si" + TAB + ENTER,
- "Result" : SINGLE},
- {"Name" : "autocomplete test 2",
- "Sequence" : "si" + TAB + "_" + TAB + ENTER,
- "Result" : SINGLE_LONG},
- {"Name" : "autocomplete test 3",
- "Sequence" : "in" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 4",
- "Sequence" : "am" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 5",
- "Sequence" : "am" + TAB + "fir" + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 6",
- "Sequence" : "am" + TAB + "fir" + TAB + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 7",
- "Sequence" : "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 8",
- "Sequence" : "am" + TAB + " am" + TAB + " " + ENTER,
- "Result" : AMBIG},
- {"Name" : "autocomplete test 9",
- "Sequence" : "am" + TAB + "inv" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 10",
- "Sequence" : "au" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 11",
- "Sequence" : "au" + TAB + "1" + ENTER,
- "Result" : AUTO1},
- {"Name" : "autocomplete test 12",
- "Sequence" : "au" + TAB + "2" + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 13",
- "Sequence" : "au" + TAB + "2" + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 14",
- "Sequence" : "au" + TAB + "2 " + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 15",
- "Sequence" : "24" + TAB + ENTER,
- "Result" : "24"},
+ # test autocomplete
+ {"Name": "autocomplete test 1",
+ "Sequence": "si" + TAB + ENTER,
+ "Result": SINGLE},
+ {"Name": "autocomplete test 2",
+ "Sequence": "si" + TAB + "_" + TAB + ENTER,
+ "Result": SINGLE_LONG},
+ {"Name": "autocomplete test 3",
+ "Sequence": "in" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 4",
+ "Sequence": "am" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 5",
+ "Sequence": "am" + TAB + "fir" + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 6",
+ "Sequence": "am" + TAB + "fir" + TAB + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 7",
+ "Sequence": "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 8",
+ "Sequence": "am" + TAB + " am" + TAB + " " + ENTER,
+ "Result": AMBIG},
+ {"Name": "autocomplete test 9",
+ "Sequence": "am" + TAB + "inv" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 10",
+ "Sequence": "au" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 11",
+ "Sequence": "au" + TAB + "1" + ENTER,
+ "Result": AUTO1},
+ {"Name": "autocomplete test 12",
+ "Sequence": "au" + TAB + "2" + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 13",
+ "Sequence": "au" + TAB + "2" + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 14",
+ "Sequence": "au" + TAB + "2 " + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 15",
+ "Sequence": "24" + TAB + ENTER,
+ "Result": "24"},
-# test history
- {"Name" : "history test 1",
- "Sequence" : "invalid" + ENTER + "single" + ENTER + "invalid" + ENTER + UP + CTRL_P + ENTER,
- "Result" : SINGLE},
- {"Name" : "history test 2",
- "Sequence" : "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" + ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
- "Result" : SINGLE},
+ # test history
+ {"Name": "history test 1",
+ "Sequence": "invalid" + ENTER + "single" + ENTER + "invalid" +
+ ENTER + UP + CTRL_P + ENTER,
+ "Result": SINGLE},
+ {"Name": "history test 2",
+ "Sequence": "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" +
+ ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
+ "Result": SINGLE},
-#
-# tests that improve coverage
-#
+ #
+ # tests that improve coverage
+ #
-# empty space tests
- {"Name" : "empty space test 1",
- "Sequence" : RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 2",
- "Sequence" : BKSPACE + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 3",
- "Sequence" : CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 4",
- "Sequence" : ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 5",
- "Sequence" : " " + CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 6",
- "Sequence" : " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 7",
- "Sequence" : " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 8",
- "Sequence" : " space" + CTRL_W*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 9",
- "Sequence" : " space" + ALT_BKSPACE*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 10",
- "Sequence" : " space " + CTRL_A + ALT_D*3 + ENTER,
- "Result" : PROMPT},
+ # empty space tests
+ {"Name": "empty space test 1",
+ "Sequence": RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 2",
+ "Sequence": BKSPACE + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 3",
+ "Sequence": CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 4",
+ "Sequence": ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 5",
+ "Sequence": " " + CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 6",
+ "Sequence": " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 7",
+ "Sequence": " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 8",
+ "Sequence": " space" + CTRL_W*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 9",
+ "Sequence": " space" + ALT_BKSPACE*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 10",
+ "Sequence": " space " + CTRL_A + ALT_D*3 + ENTER,
+ "Result": PROMPT},
-# non-printable char tests
- {"Name" : "non-printable test 1",
- "Sequence" : chr(27) + chr(47) + ENTER,
- "Result" : PROMPT},
- {"Name" : "non-printable test 2",
- "Sequence" : chr(27) + chr(128) + ENTER*7,
- "Result" : PROMPT},
- {"Name" : "non-printable test 3",
- "Sequence" : chr(27) + chr(91) + chr(127) + ENTER*6,
- "Result" : PROMPT},
+ # non-printable char tests
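+    # chr(27) is ESC; these sequences feed partial or invalid escape
+    # codes to the command line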
+ {"Name": "non-printable test 1",
+ "Sequence": chr(27) + chr(47) + ENTER,
+ "Result": PROMPT},
+ {"Name": "non-printable test 2",
+ "Sequence": chr(27) + chr(128) + ENTER*7,
+ "Result": PROMPT},
+ {"Name": "non-printable test 3",
+ "Sequence": chr(27) + chr(91) + chr(127) + ENTER*6,
+ "Result": PROMPT},
-# miscellaneous tests
- {"Name" : "misc test 1",
- "Sequence" : ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 2",
- "Sequence" : "single #comment" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 3",
- "Sequence" : "#empty line" + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 4",
- "Sequence" : " single " + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 5",
- "Sequence" : "single#" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 6",
- "Sequence" : 'a' * 257 + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "misc test 7",
- "Sequence" : "clear_history" + UP*5 + DOWN*5 + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 8",
- "Sequence" : "a" + HELP + CTRL_C,
- "Result" : PROMPT},
- {"Name" : "misc test 9",
- "Sequence" : CTRL_D*3,
- "Result" : None},
+ # miscellaneous tests
+ {"Name": "misc test 1",
+ "Sequence": ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 2",
+ "Sequence": "single #comment" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 3",
+ "Sequence": "#empty line" + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 4",
+ "Sequence": " single " + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 5",
+ "Sequence": "single#" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 6",
+ "Sequence": 'a' * 257 + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "misc test 7",
+ "Sequence": "clear_history" + UP*5 + DOWN*5 + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 8",
+ "Sequence": "a" + HELP + CTRL_C,
+ "Result": PROMPT},
+ {"Name": "misc test 9",
+ "Sequence": CTRL_D*3,
+ "Result": None},
]
diff --git a/app/test/autotest.py b/app/test/autotest.py
index b9fd6b6..3a00538 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -33,44 +33,46 @@
# Script that uses either test app or qemu controlled by python-pexpect
-import sys, autotest_data, autotest_runner
-
+import autotest_data
+import autotest_runner
+import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print"Usage: autotest.py [test app|test iso image]",
+ print "[target] [whitelist|-blacklist]"
if len(sys.argv) < 3:
- usage()
- sys.exit(1)
+ usage()
+ sys.exit(1)
target = sys.argv[2]
-test_whitelist=None
-test_blacklist=None
+test_whitelist = None
+test_blacklist = None
# get blacklist/whitelist
if len(sys.argv) > 3:
- testlist = sys.argv[3].split(',')
- testlist = [test.lower() for test in testlist]
- if testlist[0].startswith('-'):
- testlist[0] = testlist[0].lstrip('-')
- test_blacklist = testlist
- else:
- test_whitelist = testlist
+ testlist = sys.argv[3].split(',')
+ testlist = [test.lower() for test in testlist]
+ if testlist[0].startswith('-'):
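+        # a leading '-' marks the list as a blacklist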
+ testlist[0] = testlist[0].lstrip('-')
+ test_blacklist = testlist
+ else:
+ test_whitelist = testlist
-cmdline = "%s -c f -n 4"%(sys.argv[1])
+cmdline = "%s -c f -n 4" % (sys.argv[1])
print cmdline
-runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist, test_whitelist)
+runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
+ test_whitelist)
for test_group in autotest_data.parallel_test_group_list:
- runner.add_parallel_test_group(test_group)
+ runner.add_parallel_test_group(test_group)
for test_group in autotest_data.non_parallel_test_group_list:
- runner.add_non_parallel_test_group(test_group)
+ runner.add_non_parallel_test_group(test_group)
num_fails = runner.run_all_tests()
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 9e8fd94..0cf4cfd 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -36,12 +36,14 @@
from glob import glob
from autotest_test_funcs import *
+
# quick and dirty function to find out number of sockets
def num_sockets():
- result = len(glob("/sys/devices/system/node/node*"))
- if result == 0:
- return 1
- return result
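+    # a system exposing no NUMA nodes is treated as single-socket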
+ result = len(glob("/sys/devices/system/node/node*"))
+ if result == 0:
+ return 1
+ return result
+
# Assign given number to each socket
# e.g. 32 becomes 32,32 or 32,32,32,32
@@ -51,420 +53,419 @@ def per_sockets(num):
# groups of tests that can be run in parallel
# the grouping has been found largely empirically
parallel_test_group_list = [
-
-{
- "Prefix": "group_1",
- "Memory" : per_sockets(8),
- "Tests" :
- [
- {
- "Name" : "Cycles autotest",
- "Command" : "cycles_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Timer autotest",
- "Command" : "timer_autotest",
- "Func" : timer_autotest,
- "Report" : None,
- },
- {
- "Name" : "Debug autotest",
- "Command" : "debug_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Errno autotest",
- "Command" : "errno_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Meter autotest",
- "Command" : "meter_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Common autotest",
- "Command" : "common_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Resource autotest",
- "Command" : "resource_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_2",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Memory autotest",
- "Command" : "memory_autotest",
- "Func" : memory_autotest,
- "Report" : None,
- },
- {
- "Name" : "Read/write lock autotest",
- "Command" : "rwlock_autotest",
- "Func" : rwlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Logs autotest",
- "Command" : "logs_autotest",
- "Func" : logs_autotest,
- "Report" : None,
- },
- {
- "Name" : "CPU flags autotest",
- "Command" : "cpuflags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Version autotest",
- "Command" : "version_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL filesystem autotest",
- "Command" : "eal_fs_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL flags autotest",
- "Command" : "eal_flags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Hash autotest",
- "Command" : "hash_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ],
-},
-{
- "Prefix": "group_3",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "LPM autotest",
- "Command" : "lpm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "LPM6 autotest",
- "Command" : "lpm6_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memcpy autotest",
- "Command" : "memcpy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memzone autotest",
- "Command" : "memzone_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "String autotest",
- "Command" : "string_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Alarm autotest",
- "Command" : "alarm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_4",
- "Memory" : per_sockets(128),
- "Tests" :
- [
- {
- "Name" : "PCI autotest",
- "Command" : "pci_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Malloc autotest",
- "Command" : "malloc_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Multi-process autotest",
- "Command" : "multiprocess_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mbuf autotest",
- "Command" : "mbuf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Per-lcore autotest",
- "Command" : "per_lcore_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Ring autotest",
- "Command" : "ring_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_5",
- "Memory" : "32",
- "Tests" :
- [
- {
- "Name" : "Spinlock autotest",
- "Command" : "spinlock_autotest",
- "Func" : spinlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Byte order autotest",
- "Command" : "byteorder_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "TAILQ autotest",
- "Command" : "tailq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Command-line autotest",
- "Command" : "cmdline_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Interrupts autotest",
- "Command" : "interrupt_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_6",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Function reentrancy autotest",
- "Command" : "func_reentrancy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mempool autotest",
- "Command" : "mempool_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Atomics autotest",
- "Command" : "atomic_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Prefetch autotest",
- "Command" : "prefetch_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Red autotest",
- "Command" : "red_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
-{
- "Prefix" : "group_7",
- "Memory" : "64",
- "Tests" :
- [
- {
- "Name" : "PMD ring autotest",
- "Command" : "ring_pmd_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Access list control autotest",
- "Command" : "acl_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Sched autotest",
- "Command" : "sched_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
+ {
+ "Prefix": "group_1",
+ "Memory": per_sockets(8),
+ "Tests":
+ [
+ {
+ "Name": "Cycles autotest",
+ "Command": "cycles_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Timer autotest",
+ "Command": "timer_autotest",
+ "Func": timer_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Debug autotest",
+ "Command": "debug_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Errno autotest",
+ "Command": "errno_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Meter autotest",
+ "Command": "meter_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Common autotest",
+ "Command": "common_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Resource autotest",
+ "Command": "resource_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_2",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Memory autotest",
+ "Command": "memory_autotest",
+ "Func": memory_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Read/write lock autotest",
+ "Command": "rwlock_autotest",
+ "Func": rwlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Logs autotest",
+ "Command": "logs_autotest",
+ "Func": logs_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "CPU flags autotest",
+ "Command": "cpuflags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Version autotest",
+ "Command": "version_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL filesystem autotest",
+ "Command": "eal_fs_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL flags autotest",
+ "Command": "eal_flags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Hash autotest",
+ "Command": "hash_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ],
+ },
+ {
+ "Prefix": "group_3",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "LPM autotest",
+ "Command": "lpm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "LPM6 autotest",
+ "Command": "lpm6_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memcpy autotest",
+ "Command": "memcpy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memzone autotest",
+ "Command": "memzone_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "String autotest",
+ "Command": "string_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Alarm autotest",
+ "Command": "alarm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_4",
+ "Memory": per_sockets(128),
+ "Tests":
+ [
+ {
+ "Name": "PCI autotest",
+ "Command": "pci_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Malloc autotest",
+ "Command": "malloc_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Multi-process autotest",
+ "Command": "multiprocess_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mbuf autotest",
+ "Command": "mbuf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Per-lcore autotest",
+ "Command": "per_lcore_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Ring autotest",
+ "Command": "ring_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_5",
+ "Memory": "32",
+ "Tests":
+ [
+ {
+ "Name": "Spinlock autotest",
+ "Command": "spinlock_autotest",
+ "Func": spinlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Byte order autotest",
+ "Command": "byteorder_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "TAILQ autotest",
+ "Command": "tailq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Command-line autotest",
+ "Command": "cmdline_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Interrupts autotest",
+ "Command": "interrupt_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_6",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Function reentrancy autotest",
+ "Command": "func_reentrancy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mempool autotest",
+ "Command": "mempool_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Atomics autotest",
+ "Command": "atomic_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Prefetch autotest",
+ "Command": "prefetch_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Red autotest",
+ "Command": "red_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_7",
+ "Memory": "64",
+ "Tests":
+ [
+ {
+ "Name": "PMD ring autotest",
+ "Command": "ring_pmd_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Access list control autotest",
+ "Command": "acl_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Sched autotest",
+ "Command": "sched_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
# tests that should not be run when any other tests are running
non_parallel_test_group_list = [
-{
- "Prefix" : "kni",
- "Memory" : "512",
- "Tests" :
- [
- {
- "Name" : "KNI autotest",
- "Command" : "kni_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "mempool_perf",
- "Memory" : per_sockets(256),
- "Tests" :
- [
- {
- "Name" : "Mempool performance autotest",
- "Command" : "mempool_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "memcpy_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Memcpy performance autotest",
- "Command" : "memcpy_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "hash_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Hash performance autotest",
- "Command" : "hash_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power autotest",
- "Command" : "power_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_acpi_cpufreq",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power ACPI cpufreq autotest",
- "Command" : "power_acpi_cpufreq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_kvm_vm",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power KVM VM autotest",
- "Command" : "power_kvm_vm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "timer_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Timer performance autotest",
- "Command" : "timer_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ {
+ "Prefix": "kni",
+ "Memory": "512",
+ "Tests":
+ [
+ {
+ "Name": "KNI autotest",
+ "Command": "kni_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "mempool_perf",
+ "Memory": per_sockets(256),
+ "Tests":
+ [
+ {
+ "Name": "Mempool performance autotest",
+ "Command": "mempool_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "memcpy_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Memcpy performance autotest",
+ "Command": "memcpy_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "hash_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Hash performance autotest",
+ "Command": "hash_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power autotest",
+ "Command": "power_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_acpi_cpufreq",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power ACPI cpufreq autotest",
+ "Command": "power_acpi_cpufreq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_kvm_vm",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power KVM VM autotest",
+ "Command": "power_kvm_vm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "timer_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Timer performance autotest",
+ "Command": "timer_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
-#
-# Please always make sure that ring_perf is the last test!
-#
-{
- "Prefix": "ring_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Ring performance autotest",
- "Command" : "ring_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ #
+ # Please always make sure that ring_perf is the last test!
+ #
+ {
+ "Prefix": "ring_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Ring performance autotest",
+ "Command": "ring_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 21d3be2..55b63a8 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -33,20 +33,29 @@
# The main logic behind running autotests in parallel
-import multiprocessing, subprocess, sys, pexpect, re, time, os, StringIO, csv
+import StringIO
+import csv
+import multiprocessing
+import pexpect
+import re
+import subprocess
+import sys
+import time
# wait for prompt
+
+
def wait_prompt(child):
- try:
- child.sendline()
- result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
- timeout = 120)
- except:
- return False
- if result == 0:
- return True
- else:
- return False
+ try:
+ child.sendline()
+ result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
+ timeout=120)
+ except:
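+        # any failure here (e.g. the child process dying) is treated
+        # as "no prompt"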
+ return False
+ if result == 0:
+ return True
+ else:
+ return False
# run a test group
# each result tuple in results list consists of:
@@ -60,363 +69,363 @@ def wait_prompt(child):
# this function needs to be outside AutotestRunner class
# because otherwise Pool won't work (or rather it will require
# quite a bit of effort to make it work).
-def run_test_group(cmdline, test_group):
- results = []
- child = None
- start_time = time.time()
- startuplog = None
-
- # run test app
- try:
- # prepare logging of init
- startuplog = StringIO.StringIO()
-
- print >>startuplog, "\n%s %s\n" % ("="*20, test_group["Prefix"])
- print >>startuplog, "\ncmdline=%s" % cmdline
-
- child = pexpect.spawn(cmdline, logfile=startuplog)
-
- # wait for target to boot
- if not wait_prompt(child):
- child.close()
-
- results.append((-1, "Fail [No prompt]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for test in test_group["Tests"]:
- results.append((-1, "Fail [No prompt]", test["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- except:
- results.append((-1, "Fail [Can't run]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for t in test_group["Tests"]:
- results.append((-1, "Fail [Can't run]", t["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- # startup was successful
- results.append((0, "Success", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # parse the binary for available test commands
- binary = cmdline.split()[0]
- stripped = 'not stripped' not in subprocess.check_output(['file', binary])
- if not stripped:
- symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
- avail_cmds = re.findall('test_register_(\w+)', symbols)
-
- # run all tests in test group
- for test in test_group["Tests"]:
-
- # create log buffer for each test
- # in multiprocessing environment, the logging would be
- # interleaved and will create a mess, hence the buffering
- logfile = StringIO.StringIO()
- child.logfile = logfile
-
- result = ()
-
- # make a note when the test started
- start_time = time.time()
-
- try:
- # print test name to log buffer
- print >>logfile, "\n%s %s\n" % ("-"*20, test["Name"])
-
- # run test function associated with the test
- if stripped or test["Command"] in avail_cmds:
- result = test["Func"](child, test["Command"])
- else:
- result = (0, "Skipped [Not Available]")
-
- # make a note when the test was finished
- end_time = time.time()
-
- # append test data to the result tuple
- result += (test["Name"], end_time - start_time,
- logfile.getvalue())
-
- # call report function, if any defined, and supply it with
- # target and complete log for test run
- if test["Report"]:
- report = test["Report"](self.target, log)
-
- # append report to results tuple
- result += (report,)
- else:
- # report is None
- result += (None,)
- except:
- # make a note when the test crashed
- end_time = time.time()
-
- # mark test as failed
- result = (-1, "Fail [Crash]", test["Name"],
- end_time - start_time, logfile.getvalue(), None)
- finally:
- # append the results to the results list
- results.append(result)
-
- # regardless of whether test has crashed, try quitting it
- try:
- child.sendline("quit")
- child.close()
- # if the test crashed, just do nothing instead
- except:
- # nop
- pass
-
- # return test results
- return results
-
+def run_test_group(cmdline, test_group):
+ results = []
+ child = None
+ start_time = time.time()
+ startuplog = None
+
+ # run test app
+ try:
+ # prepare logging of init
+ startuplog = StringIO.StringIO()
+
+ print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
+ print >>startuplog, "\ncmdline=%s" % cmdline
+
+ child = pexpect.spawn(cmdline, logfile=startuplog)
+
+ # wait for target to boot
+ if not wait_prompt(child):
+ child.close()
+
+ results.append((-1,
+ "Fail [No prompt]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for test in test_group["Tests"]:
+ results.append((-1, "Fail [No prompt]", test["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ except:
+ results.append((-1,
+ "Fail [Can't run]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for t in test_group["Tests"]:
+ results.append((-1, "Fail [Can't run]", t["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ # startup was successful
+ results.append((0, "Success", "Start %s" % test_group["Prefix"],
+ time.time() - start_time, startuplog.getvalue(), None))
+
+ # parse the binary for available test commands
+ binary = cmdline.split()[0]
+ stripped = 'not stripped' not in subprocess.check_output(['file', binary])
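+    # a stripped binary has no symbol table to read test commands from,
+    # so in that case every test is attempted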
+ if not stripped:
+ symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
+        avail_cmds = re.findall(r'test_register_(\w+)', symbols)
+
+ # run all tests in test group
+ for test in test_group["Tests"]:
+
+ # create log buffer for each test
+        # in a multiprocessing environment the logging would be
+        # interleaved and would create a mess, hence the buffering
+ logfile = StringIO.StringIO()
+ child.logfile = logfile
+
+ result = ()
+
+ # make a note when the test started
+ start_time = time.time()
+
+ try:
+ # print test name to log buffer
+ print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+
+ # run test function associated with the test
+ if stripped or test["Command"] in avail_cmds:
+ result = test["Func"](child, test["Command"])
+ else:
+ result = (0, "Skipped [Not Available]")
+
+ # make a note when the test was finished
+ end_time = time.time()
+
+ # append test data to the result tuple
+ result += (test["Name"], end_time - start_time,
+ logfile.getvalue())
+
+ # call report function, if any defined, and supply it with
+ # target and complete log for test run
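+            # note: every test in autotest_data.py currently sets
+            # "Report" to None, so this branch is effectively dead code
+            # (as written it would fail, since "self" and "log" are not
+            # defined in this standalone function)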
+ if test["Report"]:
+ report = test["Report"](self.target, log)
+
+ # append report to results tuple
+ result += (report,)
+ else:
+ # report is None
+ result += (None,)
+ except:
+ # make a note when the test crashed
+ end_time = time.time()
+
+ # mark test as failed
+ result = (-1, "Fail [Crash]", test["Name"],
+ end_time - start_time, logfile.getvalue(), None)
+ finally:
+ # append the results to the results list
+ results.append(result)
+
+ # regardless of whether test has crashed, try quitting it
+ try:
+ child.sendline("quit")
+ child.close()
+ # if the test crashed, just do nothing instead
+ except:
+ # nop
+ pass
+
+ # return test results
+ return results
# class representing an instance of autotests run
class AutotestRunner:
- cmdline = ""
- parallel_test_groups = []
- non_parallel_test_groups = []
- logfile = None
- csvwriter = None
- target = ""
- start = None
- n_tests = 0
- fails = 0
- log_buffers = []
- blacklist = []
- whitelist = []
-
-
- def __init__(self, cmdline, target, blacklist, whitelist):
- self.cmdline = cmdline
- self.target = target
- self.blacklist = blacklist
- self.whitelist = whitelist
-
- # log file filename
- logfile = "%s.log" % target
- csvfile = "%s.csv" % target
-
- self.logfile = open(logfile, "w")
- csvfile = open(csvfile, "w")
- self.csvwriter = csv.writer(csvfile)
-
- # prepare results table
- self.csvwriter.writerow(["test_name","test_result","result_str"])
-
-
-
- # set up cmdline string
- def __get_cmdline(self, test):
- cmdline = self.cmdline
-
- # append memory limitations for each test
- # otherwise tests won't run in parallel
- if not "i686" in self.target:
- cmdline += " --socket-mem=%s"% test["Memory"]
- else:
- # affinitize startup so that tests don't fail on i686
- cmdline = "taskset 1 " + cmdline
- cmdline += " -m " + str(sum(map(int,test["Memory"].split(","))))
-
- # set group prefix for autotest group
- # otherwise they won't run in parallel
- cmdline += " --file-prefix=%s"% test["Prefix"]
-
- return cmdline
-
-
-
- def add_parallel_test_group(self,test_group):
- self.parallel_test_groups.append(test_group)
-
- def add_non_parallel_test_group(self,test_group):
- self.non_parallel_test_groups.append(test_group)
-
-
- def __process_results(self, results):
- # this iterates over individual test results
- for i, result in enumerate(results):
-
- # increase total number of tests that were run
- # do not include "start" test
- if i > 0:
- self.n_tests += 1
-
- # unpack result tuple
- test_result, result_str, test_name, \
- test_time, log, report = result
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
-
- # don't print out total time every line, it's the same anyway
- if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
- else:
- print ""
-
- # if test failed and it wasn't a "start" test
- if test_result < 0 and not i == 0:
- self.fails += 1
-
- # collect logs
- self.log_buffers.append(log)
-
- # create report if it exists
- if report:
- try:
- f = open("%s_%s_report.rst" % (self.target,test_name), "w")
- except IOError:
- print "Report for %s could not be created!" % test_name
- else:
- with f:
- f.write(report)
-
- # write test result to CSV file
- if i != 0:
- self.csvwriter.writerow([test_name, test_result, result_str])
-
-
-
-
- # this function iterates over test groups and removes each
- # test that is not in whitelist/blacklist
- def __filter_groups(self, test_groups):
- groups_to_remove = []
-
- # filter out tests from parallel test groups
- for i, test_group in enumerate(test_groups):
-
- # iterate over a copy so that we could safely delete individual tests
- for test in test_group["Tests"][:]:
- test_id = test["Command"]
-
- # dump tests are specified in full e.g. "Dump_mempool"
- if "_autotest" in test_id:
- test_id = test_id[:-len("_autotest")]
-
- # filter out blacklisted/whitelisted tests
- if self.blacklist and test_id in self.blacklist:
- test_group["Tests"].remove(test)
- continue
- if self.whitelist and test_id not in self.whitelist:
- test_group["Tests"].remove(test)
- continue
-
- # modify or remove original group
- if len(test_group["Tests"]) > 0:
- test_groups[i] = test_group
- else:
- # remember which groups should be deleted
- # put the numbers backwards so that we start
- # deleting from the end, not from the beginning
- groups_to_remove.insert(0, i)
-
- # remove test groups that need to be removed
- for i in groups_to_remove:
- del test_groups[i]
-
- return test_groups
-
-
-
- # iterate over test groups and run tests associated with them
- def run_all_tests(self):
- # filter groups
- self.parallel_test_groups = \
- self.__filter_groups(self.parallel_test_groups)
- self.non_parallel_test_groups = \
- self.__filter_groups(self.non_parallel_test_groups)
-
- # create a pool of worker threads
- pool = multiprocessing.Pool(processes=1)
-
- results = []
-
- # whatever happens, try to save as much logs as possible
- try:
-
- # create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
-
- # make a note of tests start time
- self.start = time.time()
-
- # assign worker threads to run test groups
- for test_group in self.parallel_test_groups:
- result = pool.apply_async(run_test_group,
- [self.__get_cmdline(test_group), test_group])
- results.append(result)
-
- # iterate while we have group execution results to get
- while len(results) > 0:
-
- # iterate over a copy to be able to safely delete results
- # this iterates over a list of group results
- for group_result in results[:]:
-
- # if the thread hasn't finished yet, continue
- if not group_result.ready():
- continue
-
- res = group_result.get()
-
- self.__process_results(res)
-
- # remove result from results list once we're done with it
- results.remove(group_result)
-
- # run non_parallel tests. they are run one by one, synchronously
- for test_group in self.non_parallel_test_groups:
- group_result = run_test_group(self.__get_cmdline(test_group), test_group)
-
- self.__process_results(group_result)
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60, total_time % 60)
- if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
-
- # write summary to logfile
- self.logfile.write("Summary\n")
- self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
- self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
- self.logfile.write("Failed tests: ".ljust(15) + "%i\n" % self.fails)
- except:
- print "Exception occured"
- print sys.exc_info()
- self.fails = 1
-
- # drop logs from all executions to a logfile
- for buf in self.log_buffers:
- self.logfile.write(buf.replace("\r",""))
-
- log_buffers = []
-
- return self.fails
+ cmdline = ""
+ parallel_test_groups = []
+ non_parallel_test_groups = []
+ logfile = None
+ csvwriter = None
+ target = ""
+ start = None
+ n_tests = 0
+ fails = 0
+ log_buffers = []
+ blacklist = []
+ whitelist = []
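+    # note: the mutable defaults above are class attributes and are
+    # shared between instances unless reassigned in __init__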
+
+ def __init__(self, cmdline, target, blacklist, whitelist):
+ self.cmdline = cmdline
+ self.target = target
+ self.blacklist = blacklist
+ self.whitelist = whitelist
+
+ # log file filename
+ logfile = "%s.log" % target
+ csvfile = "%s.csv" % target
+
+ self.logfile = open(logfile, "w")
+ csvfile = open(csvfile, "w")
+ self.csvwriter = csv.writer(csvfile)
+
+ # prepare results table
+ self.csvwriter.writerow(["test_name", "test_result", "result_str"])
+
+ # set up cmdline string
+ def __get_cmdline(self, test):
+ cmdline = self.cmdline
+
+ # append memory limitations for each test
+ # otherwise tests won't run in parallel
+ if "i686" not in self.target:
+ cmdline += " --socket-mem=%s" % test["Memory"]
+ else:
+ # affinitize startup so that tests don't fail on i686
+ cmdline = "taskset 1 " + cmdline
+ cmdline += " -m " + str(sum(map(int, test["Memory"].split(","))))
+
+ # set group prefix for autotest group
+ # otherwise they won't run in parallel
+ cmdline += " --file-prefix=%s" % test["Prefix"]
+
+ return cmdline
+
+ def add_parallel_test_group(self, test_group):
+ self.parallel_test_groups.append(test_group)
+
+ def add_non_parallel_test_group(self, test_group):
+ self.non_parallel_test_groups.append(test_group)
+
+ def __process_results(self, results):
+ # this iterates over individual test results
+ for i, result in enumerate(results):
+
+ # increase total number of tests that were run
+ # do not include "start" test
+ if i > 0:
+ self.n_tests += 1
+
+ # unpack result tuple
+ test_result, result_str, test_name, \
+ test_time, log, report = result
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print results, test run time and total time since start
+ print ("%s:" % test_name).ljust(30),
+ print result_str.ljust(29),
+ print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+
+ # don't print out total time every line, it's the same anyway
+ if i == len(results) - 1:
+ print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ else:
+ print ""
+
+ # if test failed and it wasn't a "start" test
+ if test_result < 0 and not i == 0:
+ self.fails += 1
+
+ # collect logs
+ self.log_buffers.append(log)
+
+ # create report if it exists
+ if report:
+ try:
+ f = open("%s_%s_report.rst" %
+ (self.target, test_name), "w")
+ except IOError:
+ print "Report for %s could not be created!" % test_name
+ else:
+ with f:
+ f.write(report)
+
+ # write test result to CSV file
+ if i != 0:
+ self.csvwriter.writerow([test_name, test_result, result_str])
+
+ # this function iterates over test groups and removes each
+ # test that is not in whitelist/blacklist
+ def __filter_groups(self, test_groups):
+ groups_to_remove = []
+
+ # filter out tests from parallel test groups
+ for i, test_group in enumerate(test_groups):
+
+            # iterate over a copy so that we can safely delete
+            # individual tests
+ for test in test_group["Tests"][:]:
+ test_id = test["Command"]
+
+ # dump tests are specified in full e.g. "Dump_mempool"
+ if "_autotest" in test_id:
+ test_id = test_id[:-len("_autotest")]
+
+ # filter out blacklisted/whitelisted tests
+ if self.blacklist and test_id in self.blacklist:
+ test_group["Tests"].remove(test)
+ continue
+ if self.whitelist and test_id not in self.whitelist:
+ test_group["Tests"].remove(test)
+ continue
+
+ # modify or remove original group
+ if len(test_group["Tests"]) > 0:
+ test_groups[i] = test_group
+ else:
+ # remember which groups should be deleted
+ # put the numbers backwards so that we start
+ # deleting from the end, not from the beginning
+ groups_to_remove.insert(0, i)
+
+ # remove test groups that need to be removed
+ for i in groups_to_remove:
+ del test_groups[i]
+
+ return test_groups
+
+ # iterate over test groups and run tests associated with them
+ def run_all_tests(self):
+ # filter groups
+ self.parallel_test_groups = \
+ self.__filter_groups(self.parallel_test_groups)
+ self.non_parallel_test_groups = \
+ self.__filter_groups(self.non_parallel_test_groups)
+
+ # create a pool of worker threads
+ pool = multiprocessing.Pool(processes=1)
+
+ results = []
+
+ # whatever happens, try to save as much logs as possible
+ try:
+
+ # create table header
+ print ""
+ print "Test name".ljust(30),
+ print "Test result".ljust(29),
+ print "Test".center(9),
+ print "Total".center(9)
+ print "=" * 80
+
+ # make a note of tests start time
+ self.start = time.time()
+
+ # assign worker threads to run test groups
+ for test_group in self.parallel_test_groups:
+ result = pool.apply_async(run_test_group,
+ [self.__get_cmdline(test_group),
+ test_group])
+ results.append(result)
+
+ # iterate while we have group execution results to get
+ while len(results) > 0:
+
+ # iterate over a copy to be able to safely delete results
+ # this iterates over a list of group results
+ for group_result in results[:]:
+
+ # if the thread hasn't finished yet, continue
+ if not group_result.ready():
+ continue
+
+ res = group_result.get()
+
+ self.__process_results(res)
+
+ # remove result from results list once we're done with it
+ results.remove(group_result)
+
+ # run non_parallel tests. they are run one by one, synchronously
+ for test_group in self.non_parallel_test_groups:
+ group_result = run_test_group(
+ self.__get_cmdline(test_group), test_group)
+
+ self.__process_results(group_result)
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print out summary
+ print "=" * 80
+ print "Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60)
+ if self.fails != 0:
+ print "Number of failed tests: %s" % str(self.fails)
+
+ # write summary to logfile
+ self.logfile.write("Summary\n")
+ self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
+ self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
+ self.logfile.write("Failed tests: ".ljust(
+ 15) + "%i\n" % self.fails)
+ except:
+ print "Exception occurred"
+ print sys.exc_info()
+ self.fails = 1
+
+ # drop logs from all executions to a logfile
+ for buf in self.log_buffers:
+ self.logfile.write(buf.replace("\r", ""))
+
+ return self.fails
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index 14cffd0..c482ea8 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -33,257 +33,272 @@
# Test functions
-import sys, pexpect, time, os, re
+import pexpect
# default autotest, used to run most tests
# waits for "Test OK"
+
+
def default_autotest(child, test_name):
- child.sendline(test_name)
- result = child.expect(["Test OK", "Test Failed",
- "Command not found", pexpect.TIMEOUT], timeout = 900)
- if result == 1:
- return -1, "Fail"
- elif result == 2:
- return -1, "Fail [Not found]"
- elif result == 3:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ result = child.expect(["Test OK", "Test Failed",
+ "Command not found", pexpect.TIMEOUT], timeout=900)
+ if result == 1:
+ return -1, "Fail"
+ elif result == 2:
+ return -1, "Fail [Not found]"
+ elif result == 3:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
# autotest used to run dump commands
# just fires the command
+
+
def dump_autotest(child, test_name):
- child.sendline(test_name)
- return 0, "Success"
+ child.sendline(test_name)
+ return 0, "Success"
# memory autotest
# reads output and waits for Test OK
+
+
def memory_autotest(child, test_name):
- child.sendline(test_name)
- regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, socket_id:[0-9]*"
- index = child.expect([regexp, pexpect.TIMEOUT], timeout = 180)
- if index != 0:
- return -1, "Fail [Timeout]"
- size = int(child.match.groups()[0], 16)
- if size <= 0:
- return -1, "Fail [Bad size]"
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, " \
+ "socket_id:[0-9]*"
+ index = child.expect([regexp, pexpect.TIMEOUT], timeout=180)
+ if index != 0:
+ return -1, "Fail [Timeout]"
+ size = int(child.match.groups()[0], 16)
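+    # note: the captured length is decimal digits but is parsed with
+    # base 16 here; only the positivity check below depends on the value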
+ if size <= 0:
+ return -1, "Fail [Bad size]"
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
+
def spinlock_autotest(child, test_name):
- i = 0
- ir = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 5)
- # ok
- if index == 0:
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
- elif index == 3:
- if int(child.match.groups()[0]) < ir:
- return -1, "Fail [Bad order]"
- ir = int(child.match.groups()[0])
-
- # fail
- elif index == 4:
- return -1, "Fail [Timeout]"
- elif index == 1:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ ir = 0
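+    # i and ir track the last core id seen for the plain and recursive
+    # lock messages; core ids must arrive in non-decreasing order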
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Hello from within recursive locks "
+ "from ([0-9]*) !",
+ pexpect.TIMEOUT], timeout=5)
+ # ok
+ if index == 0:
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+ elif index == 3:
+ if int(child.match.groups()[0]) < ir:
+ return -1, "Fail [Bad order]"
+ ir = int(child.match.groups()[0])
+
+ # fail
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+ elif index == 1:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def rwlock_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Global write lock taken on master core ([0-9]*)",
- pexpect.TIMEOUT], timeout = 10)
- # ok
- if index == 0:
- if i != 0xffff:
- return -1, "Fail [Message is missing]"
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
-
- # must be the last message, check ordering
- elif index == 3:
- i = 0xffff
-
- elif index == 4:
- return -1, "Fail [Timeout]"
-
- # fail
- else:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
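+    # i tracks message ordering and doubles as a sentinel: it is set to
+    # 0xffff once the final "write lock taken" message arrives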
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Global write lock taken on master "
+ "core ([0-9]*)",
+ pexpect.TIMEOUT], timeout=10)
+ # ok
+ if index == 0:
+ if i != 0xffff:
+ return -1, "Fail [Message is missing]"
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+
+ # must be the last message, check ordering
+ elif index == 3:
+ i = 0xffff
+
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+
+ # fail
+ else:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def logs_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- log_list = [
- "TESTAPP1: error message",
- "TESTAPP1: critical message",
- "TESTAPP2: critical message",
- "TESTAPP1: error message",
- ]
-
- for log_msg in log_list:
- index = child.expect([log_msg,
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 3:
- return -1, "Fail [Timeout]"
- # not ok
- elif index != 0:
- return -1, "Fail"
-
- index = child.expect(["Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ log_list = [
+ "TESTAPP1: error message",
+ "TESTAPP1: critical message",
+ "TESTAPP2: critical message",
+ "TESTAPP1: error message",
+ ]
+
+ for log_msg in log_list:
+ index = child.expect([log_msg,
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 3:
+ return -1, "Fail [Timeout]"
+ # not ok
+ elif index != 0:
+ return -1, "Fail"
+
+ index = child.expect(["Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ return 0, "Success"
+
def timer_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- index = child.expect(["Start timer stress tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer stress tests 2",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer basic tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- prev_lcore_timer1 = -1
-
- lcore_tim0 = -1
- lcore_tim1 = -1
- lcore_tim2 = -1
- lcore_tim3 = -1
-
- while True:
- index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) count=([0-9]*) on core ([0-9]*)",
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 1:
- break
-
- if index == 2:
- return -1, "Fail"
- elif index == 3:
- return -1, "Fail [Timeout]"
-
- try:
- t = int(child.match.groups()[0])
- id = int(child.match.groups()[1])
- cnt = int(child.match.groups()[2])
- lcore = int(child.match.groups()[3])
- except:
- return -1, "Fail [Cannot parse]"
-
- # timer0 always expires on the same core when cnt < 20
- if id == 0:
- if lcore_tim0 == -1:
- lcore_tim0 = lcore
- elif lcore != lcore_tim0 and cnt < 20:
- return -1, "Fail [lcore != lcore_tim0 (%d, %d)]"%(lcore, lcore_tim0)
- if cnt > 21:
- return -1, "Fail [tim0 cnt > 21]"
-
- # timer1 each time expires on a different core
- if id == 1:
- if lcore == lcore_tim1:
- return -1, "Fail [lcore == lcore_tim1 (%d, %d)]"%(lcore, lcore_tim1)
- lcore_tim1 = lcore
- if cnt > 10:
- return -1, "Fail [tim1 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 2:
- if lcore_tim2 == -1:
- lcore_tim2 = lcore
- elif lcore != lcore_tim2:
- return -1, "Fail [lcore != lcore_tim2 (%d, %d)]"%(lcore, lcore_tim2)
- if cnt > 30:
- return -1, "Fail [tim2 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 3:
- if lcore_tim3 == -1:
- lcore_tim3 = lcore
- elif lcore != lcore_tim3:
- return -1, "Fail [lcore_tim3 changed (%d -> %d)]"%(lcore, lcore_tim3)
- if cnt > 30:
- return -1, "Fail [tim3 cnt > 30]"
-
- # must be 2 different cores
- if lcore_tim0 == lcore_tim3:
- return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]"%(lcore_tim0, lcore_tim3)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ index = child.expect(["Start timer stress tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer stress tests 2",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer basic tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ lcore_tim0 = -1
+ lcore_tim1 = -1
+ lcore_tim2 = -1
+ lcore_tim3 = -1
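+    # remember which lcore each of the four timers last fired on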
+
+ while True:
+ index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) "
+ "count=([0-9]*) on core ([0-9]*)",
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 1:
+ break
+
+ if index == 2:
+ return -1, "Fail"
+ elif index == 3:
+ return -1, "Fail [Timeout]"
+
+ try:
+ id = int(child.match.groups()[1])
+ cnt = int(child.match.groups()[2])
+ lcore = int(child.match.groups()[3])
+ except:
+ return -1, "Fail [Cannot parse]"
+
+ # timer0 always expires on the same core when cnt < 20
+ if id == 0:
+ if lcore_tim0 == -1:
+ lcore_tim0 = lcore
+ elif lcore != lcore_tim0 and cnt < 20:
+ return -1, "Fail [lcore != lcore_tim0 (%d, %d)]" \
+ % (lcore, lcore_tim0)
+ if cnt > 21:
+ return -1, "Fail [tim0 cnt > 21]"
+
+ # timer1 each time expires on a different core
+ if id == 1:
+ if lcore == lcore_tim1:
+ return -1, "Fail [lcore == lcore_tim1 (%d, %d)]" \
+ % (lcore, lcore_tim1)
+ lcore_tim1 = lcore
+ if cnt > 10:
+ return -1, "Fail [tim1 cnt > 30]"
+
+        # timer2 always expires on the same core
+ if id == 2:
+ if lcore_tim2 == -1:
+ lcore_tim2 = lcore
+ elif lcore != lcore_tim2:
+ return -1, "Fail [lcore != lcore_tim2 (%d, %d)]" \
+ % (lcore, lcore_tim2)
+ if cnt > 30:
+ return -1, "Fail [tim2 cnt > 30]"
+
+        # timer3 always expires on the same core
+ if id == 3:
+ if lcore_tim3 == -1:
+ lcore_tim3 = lcore
+ elif lcore != lcore_tim3:
+ return -1, "Fail [lcore_tim3 changed (%d -> %d)]" \
+ % (lcore, lcore_tim3)
+ if cnt > 30:
+ return -1, "Fail [tim3 cnt > 30]"
+
+ # must be 2 different cores
+ if lcore_tim0 == lcore_tim3:
+ return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]" \
+ % (lcore_tim0, lcore_tim3)
+
+ return 0, "Success"
+
def ring_autotest(child, test_name):
- child.sendline(test_name)
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 2)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- child.sendline("set_watermark test 100")
- child.sendline("dump_ring test")
- index = child.expect([" watermark=100",
- pexpect.TIMEOUT], timeout = 1)
- if index != 0:
- return -1, "Fail [Bad watermark]"
-
- return 0, "Success"
+ child.sendline(test_name)
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=2)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ child.sendline("set_watermark test 100")
+ child.sendline("dump_ring test")
+ index = child.expect([" watermark=100",
+ pexpect.TIMEOUT], timeout=1)
+ if index != 0:
+ return -1, "Fail [Bad watermark]"
+
+ return 0, "Success"
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 29e8efb..34c62de 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -58,7 +58,8 @@
html_show_copyright = False
highlight_language = 'none'
-version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion']).decode('utf-8').rstrip()
+version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion'])
+version = version.decode('utf-8').rstrip()
release = version
master_doc = 'index'
@@ -94,6 +95,7 @@
'preamble': latex_preamble
}
+
# Override the default Latex formatter in order to modify the
# code/verbatim blocks.
class CustomLatexFormatter(LatexFormatter):
@@ -117,12 +119,12 @@ def __init__(self, **options):
("tools/devbind", "dpdk-devbind",
"check device status and bind/unbind them from drivers", "", 8)]
-######## :numref: fallback ########
+
+# ####### :numref: fallback ########
# The following hook functions add some simple handling for the :numref:
# directive for Sphinx versions prior to 1.3.1. The functions replace the
# :numref: reference with a link to the target (for all Sphinx doc types).
# It doesn't try to label figures/tables.
-
def numref_role(reftype, rawtext, text, lineno, inliner):
"""
Add a Sphinx role to handle numref references. Note, we can't convert
@@ -136,6 +138,7 @@ def numref_role(reftype, rawtext, text, lineno, inliner):
internal=True)
return [newnode], []
+
def process_numref(app, doctree, from_docname):
"""
Process the numref nodes once the doctree has been built and prior to
diff --git a/examples/ip_pipeline/config/diagram-generator.py b/examples/ip_pipeline/config/diagram-generator.py
index 6b7170b..1748833 100755
--- a/examples/ip_pipeline/config/diagram-generator.py
+++ b/examples/ip_pipeline/config/diagram-generator.py
@@ -36,7 +36,8 @@
# the DPDK ip_pipeline application.
#
# The input configuration file is translated to an output file in DOT syntax,
-# which is then used to create the image file using graphviz (www.graphviz.org).
+# which is then used to create the image file using graphviz
+# (www.graphviz.org).
#
from __future__ import print_function
@@ -94,6 +95,7 @@
# SOURCEx | SOURCEx | SOURCEx | PIPELINEy | SOURCEx
# SINKx | SINKx | PIPELINEy | SINKx | SINKx
+
#
# Parse the input configuration file to detect the graph nodes and edges
#
@@ -321,16 +323,17 @@ def process_config_file(cfgfile):
#
print('Creating image file "%s" ...' % imgfile)
if os.system('which dot > /dev/null'):
- print('Error: Unable to locate "dot" executable.' \
- 'Please install the "graphviz" package (www.graphviz.org).')
+ print('Error: Unable to locate "dot" executable. '
+ 'Please install the "graphviz" package (www.graphviz.org).')
return
os.system(dot_cmd)
if __name__ == '__main__':
- parser = argparse.ArgumentParser(description=\
- 'Create diagram for IP pipeline configuration file.')
+ parser = argparse.ArgumentParser(description='Create diagram for IP '
+ 'pipeline configuration '
+ 'file.')
parser.add_argument(
'-f',
diff --git a/examples/ip_pipeline/config/pipeline-to-core-mapping.py b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
index c2050b8..7a4eaa2 100755
--- a/examples/ip_pipeline/config/pipeline-to-core-mapping.py
+++ b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
@@ -39,15 +39,14 @@
#
from __future__ import print_function
-import sys
-import errno
-import os
-import re
+from collections import namedtuple
+import argparse
import array
+import errno
import itertools
+import os
import re
-import argparse
-from collections import namedtuple
+import sys
# default values
enable_stage0_traceout = 1
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index d38d0b5..ccc22ec 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -38,40 +38,40 @@
cores = []
core_map = {}
-fd=open("/proc/cpuinfo")
+fd = open("/proc/cpuinfo")
lines = fd.readlines()
fd.close()
core_details = []
core_lines = {}
for line in lines:
- if len(line.strip()) != 0:
- name, value = line.split(":", 1)
- core_lines[name.strip()] = value.strip()
- else:
- core_details.append(core_lines)
- core_lines = {}
+ if len(line.strip()) != 0:
+ name, value = line.split(":", 1)
+ core_lines[name.strip()] = value.strip()
+ else:
+ core_details.append(core_lines)
+ core_lines = {}
for core in core_details:
- for field in ["processor", "core id", "physical id"]:
- if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
- sys.exit(1)
- core[field] = int(core[field])
+ for field in ["processor", "core id", "physical id"]:
+ if field not in core:
+ print "Error getting '%s' value from /proc/cpuinfo" % field
+ sys.exit(1)
+ core[field] = int(core[field])
- if core["core id"] not in cores:
- cores.append(core["core id"])
- if core["physical id"] not in sockets:
- sockets.append(core["physical id"])
- key = (core["physical id"], core["core id"])
- if key not in core_map:
- core_map[key] = []
- core_map[key].append(core["processor"])
+ if core["core id"] not in cores:
+ cores.append(core["core id"])
+ if core["physical id"] not in sockets:
+ sockets.append(core["physical id"])
+ key = (core["physical id"], core["core id"])
+ if key not in core_map:
+ core_map[key] = []
+ core_map[key].append(core["processor"])
print "============================================================"
print "Core and Socket Information (as reported by '/proc/cpuinfo')"
print "============================================================\n"
-print "cores = ",cores
+print "cores = ", cores
print "sockets = ", sockets
print ""
@@ -81,15 +81,16 @@
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
+ print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
print ""
+
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "--------".ljust(max_core_map_len),
+ print "--------".ljust(max_core_map_len),
print ""
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
- for s in sockets:
- print str(core_map[(s,c)]).ljust(max_core_map_len),
- print ""
+ print "Core %s" % str(c).ljust(max_core_id_len),
+ for s in sockets:
+ print str(core_map[(s, c)]).ljust(max_core_map_len),
+ print ""
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index f1d374d..4f51a4b 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -93,10 +93,10 @@ def usage():
Unbind a device (Equivalent to \"-b none\")
--force:
- By default, network devices which are used by Linux - as indicated by having
- routes in the routing table - cannot be modified. Using the --force
- flag overrides this behavior, allowing active links to be forcibly
- unbound.
+ By default, network devices which are used by Linux - as indicated by
+ having routes in the routing table - cannot be modified. Using the
+ --force flag overrides this behavior, allowing active links to be
+ forcibly unbound.
WARNING: This can lead to loss of network connection and should be used
with caution.
@@ -151,7 +151,7 @@ def find_module(mod):
# check for a copy based off current path
tools_dir = dirname(abspath(sys.argv[0]))
- if (tools_dir.endswith("tools")):
+ if tools_dir.endswith("tools"):
base_dir = dirname(tools_dir)
find_out = check_output(["find", base_dir, "-name", mod + ".ko"])
if len(find_out) > 0: # something matched
@@ -249,7 +249,7 @@ def get_nic_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
+ if len(dev_line) == 0:
if dev["Class"][0:2] == NETWORK_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
@@ -315,8 +315,8 @@ def get_crypto_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
- if (dev["Class"][0:2] == CRYPTO_BASE_CLASS):
+ if len(dev_line) == 0:
+ if dev["Class"][0:2] == CRYPTO_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
dev["Device"] = int(dev["Device"], 16)
@@ -513,7 +513,8 @@ def display_devices(title, dev_list, extra_params=None):
for dev in dev_list:
if extra_params is not None:
strings.append("%s '%s' %s" % (dev["Slot"],
- dev["Device_str"], extra_params % dev))
+ dev["Device_str"],
+ extra_params % dev))
else:
strings.append("%s '%s'" % (dev["Slot"], dev["Device_str"]))
# sort before printing, so that the entries appear in PCI order
@@ -532,7 +533,7 @@ def show_status():
# split our list of network devices into the three categories above
for d in devices.keys():
- if (NETWORK_BASE_CLASS in devices[d]["Class"]):
+ if NETWORK_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
@@ -555,7 +556,7 @@ def show_status():
no_drv = []
for d in devices.keys():
- if (CRYPTO_BASE_CLASS in devices[d]["Class"]):
+ if CRYPTO_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3db9819..3d3ad7d 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -4,52 +4,20 @@
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+import json
import os
+import platform
+import string
import sys
+from elftools.common.exceptions import ELFError
+from elftools.common.py3compat import (byte2int, bytes2str, str2bytes)
+from elftools.elf.elffile import ELFFile
from optparse import OptionParser
-import string
-import json
-import platform
# For running from development directory. It should take precedence over the
# installed pyelftools.
sys.path.insert(0, '.')
-
-from elftools import __version__
-from elftools.common.exceptions import ELFError
-from elftools.common.py3compat import (
- ifilter, byte2int, bytes2str, itervalues, str2bytes)
-from elftools.elf.elffile import ELFFile
-from elftools.elf.dynamic import DynamicSection, DynamicSegment
-from elftools.elf.enums import ENUM_D_TAG
-from elftools.elf.segments import InterpSegment
-from elftools.elf.sections import SymbolTableSection
-from elftools.elf.gnuversions import (
- GNUVerSymSection, GNUVerDefSection,
- GNUVerNeedSection,
-)
-from elftools.elf.relocation import RelocationSection
-from elftools.elf.descriptions import (
- describe_ei_class, describe_ei_data, describe_ei_version,
- describe_ei_osabi, describe_e_type, describe_e_machine,
- describe_e_version_numeric, describe_p_type, describe_p_flags,
- describe_sh_type, describe_sh_flags,
- describe_symbol_type, describe_symbol_bind, describe_symbol_visibility,
- describe_symbol_shndx, describe_reloc_type, describe_dyn_tag,
- describe_ver_flags,
-)
-from elftools.elf.constants import E_FLAGS
-from elftools.dwarf.dwarfinfo import DWARFInfo
-from elftools.dwarf.descriptions import (
- describe_reg_name, describe_attr_value, set_global_machine_arch,
- describe_CFI_instructions, describe_CFI_register_rule,
- describe_CFI_CFA_rule,
-)
-from elftools.dwarf.constants import (
- DW_LNS_copy, DW_LNS_set_file, DW_LNE_define_file)
-from elftools.dwarf.callframe import CIE, FDE
-
raw_output = False
pcidb = None
@@ -326,7 +294,7 @@ def parse_pmd_info_string(self, mystring):
for i in optional_pmd_info:
try:
print("%s: %s" % (i['tag'], pmdinfo[i['id']]))
- except KeyError as e:
+ except KeyError:
continue
if (len(pmdinfo["pci_ids"]) != 0):
@@ -475,7 +443,7 @@ def process_dt_needed_entries(self):
with open(library, 'rb') as file:
try:
libelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
print("%s is no an ELF file" % library)
continue
libelf.process_dt_needed_entries()
@@ -491,7 +459,7 @@ def scan_autoload_path(autoload_path):
try:
dirs = os.listdir(autoload_path)
- except OSError as e:
+ except OSError:
# Couldn't read the directory, give up
return
@@ -503,10 +471,10 @@ def scan_autoload_path(autoload_path):
try:
file = open(dpath, 'rb')
readelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
# this is likely not an elf file, skip it
continue
- except IOError as e:
+ except IOError:
# No permission to read the file, skip it
continue
@@ -531,7 +499,7 @@ def scan_for_autoload_pmds(dpdk_path):
file = open(dpdk_path, 'rb')
try:
readelf = ReadElf(file, sys.stdout)
- except ElfError as e:
+ except ELFError:
if raw_output is False:
print("Unable to parse %s" % file)
return
@@ -557,7 +525,7 @@ def main(stream=None):
global raw_output
global pcidb
- pcifile_default = "./pci.ids" # for unknown OS's assume local file
+ pcifile_default = "./pci.ids" # For unknown OS's assume local file
if platform.system() == 'Linux':
pcifile_default = "/usr/share/hwdata/pci.ids"
elif platform.system() == 'FreeBSD':
@@ -577,7 +545,8 @@ def main(stream=None):
"to get vendor names from",
default=pcifile_default, metavar="FILE")
optparser.add_option("-t", "--table", dest="tblout",
- help="output information on hw support as a hex table",
+ help="output information on hw support as a "
+ "hex table",
action='store_true')
optparser.add_option("-p", "--plugindir", dest="pdir",
help="scan dpdk for autoload plugins",
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v2 2/4] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (4 preceding siblings ...)
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 1/4] app: make python apps pep8 compliant John McNamara
@ 2016-12-08 16:03 ` John McNamara
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent shebang line John McNamara
` (14 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-08 16:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Make all the DPDK Python apps work with Python 2 or 3 so
that they run with whatever the system default Python is.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 26 ++++++++++++------------
app/cmdline_test/cmdline_test_data.py | 2 +-
app/test/autotest.py | 10 ++++-----
app/test/autotest_runner.py | 37 +++++++++++++++++-----------------
tools/cpu_layout.py | 38 ++++++++++++++++++-----------------
tools/dpdk-pmdinfo.py | 12 ++++++-----
6 files changed, 64 insertions(+), 61 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 4729987..229f71f 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,7 +32,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that runs cmdline_test app and feeds keystrokes into it.
-
+from __future__ import print_function
import cmdline_test_data
import os
import pexpect
@@ -81,38 +81,38 @@ def runHistoryTest(child):
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
child = pexpect.spawn(test_app_path)
-print "Running command-line tests..."
+print("Running command-line tests...")
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
+ testname = (test["Name"] + ":").ljust(30)
try:
runTest(child, test)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
-print ("History fill test:").ljust(30),
+testname = ("History fill test:").ljust(30)
try:
runHistoryTest(child)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index 3ce6cbc..9cc966b 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
diff --git a/app/test/autotest.py b/app/test/autotest.py
index 3a00538..5c19a02 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,15 +32,15 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that uses either test app or qemu controlled by python-pexpect
-
+from __future__ import print_function
import autotest_data
import autotest_runner
import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print("Usage: autotest.py [test app|test iso image] ",
+ "[target] [whitelist|-blacklist]")
if len(sys.argv) < 3:
usage()
@@ -63,7 +63,7 @@ def usage():
cmdline = "%s -c f -n 4" % (sys.argv[1])
-print cmdline
+print(cmdline)
runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
test_whitelist)
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 55b63a8..7aeb0bd 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -271,15 +271,16 @@ def __process_results(self, results):
total_time = int(cur_time - self.start)
# print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+ result = ("%s:" % test_name).ljust(30)
+ result += result_str.ljust(29)
+ result += "[%02dm %02ds]" % (test_time / 60, test_time % 60)
# don't print out total time every line, it's the same anyway
if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ print(result,
+ "[%02dm %02ds]" % (total_time / 60, total_time % 60))
else:
- print ""
+ print(result)
# if test failed and it wasn't a "start" test
if test_result < 0 and not i == 0:
@@ -294,7 +295,7 @@ def __process_results(self, results):
f = open("%s_%s_report.rst" %
(self.target, test_name), "w")
except IOError:
- print "Report for %s could not be created!" % test_name
+ print("Report for %s could not be created!" % test_name)
else:
with f:
f.write(report)
@@ -360,12 +361,10 @@ def run_all_tests(self):
try:
# create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
+ print("")
+ print("Test name".ljust(30), "Test result".ljust(29),
+ "Test".center(9), "Total".center(9))
+ print("=" * 80)
# make a note of tests start time
self.start = time.time()
@@ -407,11 +406,11 @@ def run_all_tests(self):
total_time = int(cur_time - self.start)
# print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60,
- total_time % 60)
+ print("=" * 80)
+ print("Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60))
if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
+ print("Number of failed tests: %s" % str(self.fails))
# write summary to logfile
self.logfile.write("Summary\n")
@@ -420,8 +419,8 @@ def run_all_tests(self):
self.logfile.write("Failed tests: ".ljust(
15) + "%i\n" % self.fails)
except:
- print "Exception occurred"
- print sys.exc_info()
+ print("Exception occurred")
+ print(sys.exc_info())
self.fails = 1
# drop logs from all executions to a logfile
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index ccc22ec..0e049a6 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
@@ -31,7 +32,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
-
+from __future__ import print_function
import sys
sockets = []
@@ -55,7 +56,7 @@
for core in core_details:
for field in ["processor", "core id", "physical id"]:
if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
+ print("Error getting '%s' value from /proc/cpuinfo" % field)
sys.exit(1)
core[field] = int(core[field])
@@ -68,29 +69,30 @@
core_map[key] = []
core_map[key].append(core["processor"])
-print "============================================================"
-print "Core and Socket Information (as reported by '/proc/cpuinfo')"
-print "============================================================\n"
-print "cores = ", cores
-print "sockets = ", sockets
-print ""
+print("============================================================")
+print("Core and Socket Information (as reported by '/proc/cpuinfo')")
+print("============================================================\n")
+print("cores = ", cores)
+print("sockets = ", sockets)
+print("")
max_processor_len = len(str(len(cores) * len(sockets) * 2 - 1))
max_core_map_len = max_processor_len * 2 + len('[, ]') + len('Socket ')
max_core_id_len = len(str(max(cores)))
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
-print ""
+ output += " Socket %s" % str(s).ljust(max_core_map_len - len('Socket '))
+print(output)
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "--------".ljust(max_core_map_len),
-print ""
+ output += " --------".ljust(max_core_map_len)
+ output += " "
+print(output)
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
+ output = "Core %s" % str(c).ljust(max_core_id_len)
for s in sockets:
- print str(core_map[(s, c)]).ljust(max_core_map_len),
- print ""
+ output += " " + str(core_map[(s, c)]).ljust(max_core_map_len)
+ print(output)
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3d3ad7d..097982e 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -1,9 +1,11 @@
#!/usr/bin/env python
+
# -------------------------------------------------------------------------
#
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+from __future__ import print_function
import json
import os
import platform
@@ -54,7 +56,7 @@ def addDevice(self, deviceStr):
self.devices[devID] = Device(deviceStr)
def report(self):
- print self.ID, self.name
+ print(self.ID, self.name)
for id, dev in self.devices.items():
dev.report()
@@ -80,7 +82,7 @@ def __init__(self, deviceStr):
self.subdevices = {}
def report(self):
- print "\t%s\t%s" % (self.ID, self.name)
+ print("\t%s\t%s" % (self.ID, self.name))
for subID, subdev in self.subdevices.items():
subdev.report()
@@ -126,7 +128,7 @@ def __init__(self, vendor, device, name):
self.name = name
def report(self):
- print "\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name)
+ print("\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name))
class PCIIds:
@@ -154,7 +156,7 @@ def reportVendors(self):
"""Reports the vendors
"""
for vid, v in self.vendors.items():
- print v.ID, v.name
+ print(v.ID, v.name)
def report(self, vendor=None):
"""
@@ -185,7 +187,7 @@ def findDate(self, content):
def parse(self):
if len(self.contents) < 1:
- print "data/%s-pci.ids not found" % self.date
+ print("data/%s-pci.ids not found" % self.date)
else:
vendorID = ""
deviceID = ""
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent shebang line
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (5 preceding siblings ...)
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 2/4] app: make python apps python2/3 compliant John McNamara
@ 2016-12-08 16:03 ` John McNamara
2016-12-08 16:20 ` Thomas Monjalon
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 4/4] doc: add required python versions to coding guidelines John McNamara
` (13 subsequent siblings)
20 siblings, 1 reply; 28+ messages in thread
From: John McNamara @ 2016-12-08 16:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Add a consistent "env python" shebang line to the DPDK Python
apps so that they can call the default system python.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/test/autotest_test_funcs.py | 2 +-
doc/guides/conf.py | 2 ++
tools/dpdk-devbind.py | 3 ++-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index c482ea8..1fa8cf0 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 34c62de..97c5d0e 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
# BSD LICENSE
# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
# All rights reserved.
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index 4f51a4b..a5b2af5 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent shebang line
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent shebang line John McNamara
@ 2016-12-08 16:20 ` Thomas Monjalon
2016-12-08 20:44 ` Mcnamara, John
0 siblings, 1 reply; 28+ messages in thread
From: Thomas Monjalon @ 2016-12-08 16:20 UTC (permalink / raw)
To: John McNamara; +Cc: dev, mkletzan
2016-12-08 16:03, John McNamara:
> Add a consistent "env python" shebang line to the DPDK Python
> apps so that they can call the default system python.
The shebang is only useful for executable scripts.
Files that are only imported by other Python scripts should not have this line.
Please could you remove the shebang for conf.py and data files?
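As a sketch of the convention (file names hypothetical): a script that is
run directly keeps the shebang and the executable bit, while a file that is
only ever imported starts straight at its header:

    #!/usr/bin/env python
    # example_tool.py (hypothetical): run directly, so the shebang stays
    # and the file is marked executable with chmod +x.
    import sys
    sys.exit(0)

    # example_data.py (hypothetical): only ever imported, so it needs
    # neither a shebang nor the executable bit.
    tests = []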
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent shebang line
2016-12-08 16:20 ` Thomas Monjalon
@ 2016-12-08 20:44 ` Mcnamara, John
0 siblings, 0 replies; 28+ messages in thread
From: Mcnamara, John @ 2016-12-08 20:44 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, mkletzan
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Thursday, December 8, 2016 4:21 PM
> To: Mcnamara, John <john.mcnamara@intel.com>
> Cc: dev@dpdk.org; mkletzan@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent
> shebang line
>
> 2016-12-08 16:03, John McNamara:
> > Add a consistent "env python" shebang line to the DPDK Python apps so
> > that they can call the default system python.
>
> The shebang is only useful for executable scripts.
> Those included by other python scripts should not have this line.
> Please could you remove the shebang for conf.py and data files?
Good point. In that case I'll squash 3/4 into 2/4 since the shebang change
only affects one executable file, even though it isn't strictly a Python 3
change.
John
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v2 4/4] doc: add required python versions to coding guidelines
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (6 preceding siblings ...)
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 3/4] app: give python apps a consistent shebang line John McNamara
@ 2016-12-08 16:03 ` John McNamara
2016-12-09 15:28 ` [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant Neil Horman
` (12 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-08 16:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, John McNamara
Add a requirement to support both Python 2 and 3 to the
DPDK Python Coding Standards.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/contributing/coding_style.rst | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 1eb67f3..4163960 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -690,6 +690,7 @@ Control Statements
Python Code
-----------
-All python code should be compliant with `PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
+All Python code should work with Python 2.7+ and 3.2+ and be compliant with
+`PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
The ``pep8`` tool can be used for testing compliance with the guidelines.
--
2.7.4
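For reference, the checker named in the guidelines can also be driven from
Python rather than the command line (a minimal sketch, assuming the pep8
package is installed from PyPI):

    import pep8  # the PEP8 checker the coding guidelines refer to

    style = pep8.StyleGuide()
    report = style.check_files(["tools/cpu_layout.py"])
    print("PEP8 violations: %d" % report.total_errors)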
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (7 preceding siblings ...)
2016-12-08 16:03 ` [dpdk-dev] [PATCH v2 4/4] doc: add required python versions to coding guidelines John McNamara
@ 2016-12-09 15:28 ` Neil Horman
2016-12-09 17:00 ` Mcnamara, John
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 0/3] " John McNamara
` (11 subsequent siblings)
20 siblings, 1 reply; 28+ messages in thread
From: Neil Horman @ 2016-12-09 15:28 UTC (permalink / raw)
To: John McNamara; +Cc: dev, mkletzan
On Thu, Dec 08, 2016 at 03:51:01PM +0000, John McNamara wrote:
> These patches refactor the DPDK Python applications to make them Python 2/3
> compatible.
>
> In order to do this the patchset starts by making the apps PEP8 compliant in
> accordance with the DPDK Coding guidelines:
>
> http://dpdk.org/doc/guides/contributing/coding_style.html#python-code
>
> Implementing PEP8 and Python 2/3 compliance means that we can check all future
> Python patches for consistency. Python 2/3 support also makes downstream
> packaging easier as more distros move to Python 3 as the system python.
>
See the previous discussion about Python2/3 compatibility here:
>
> http://dpdk.org/ml/archives/dev/2016-December/051683.html
>
> I've tested that the apps compile with python 2 and 3 and I've tested some
> of the apps for consistent output but it needs additional testing.
>
> John McNamara (4):
> app: make python apps pep8 compliant
> app: make python apps python2/3 compliant
> app: give python apps a consistent shebang line
> doc: add required python versions to coding guidelines
>
> app/cmdline_test/cmdline_test.py | 87 ++-
> app/cmdline_test/cmdline_test_data.py | 403 +++++-----
> app/test/autotest.py | 46 +-
> app/test/autotest_data.py | 831 +++++++++++----------
> app/test/autotest_runner.py | 740 +++++++++---------
> app/test/autotest_test_funcs.py | 481 ++++++------
> doc/guides/conf.py | 11 +-
> doc/guides/contributing/coding_style.rst | 3 +-
> examples/ip_pipeline/config/diagram-generator.py | 13 +-
> .../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
> tools/cpu_layout.py | 79 +-
> tools/dpdk-devbind.py | 26 +-
> tools/dpdk-pmdinfo.py | 73 +-
> 13 files changed, 1410 insertions(+), 1394 deletions(-)
>
> --
> 2.7.4
>
I think the changelog is deceptive. It claims to make all the utilities python2
and 3 compliant. But compliance with python3 is more than just stylistic
formatting. After this series several of these apps continue to fail under
python3. dpdk-pmdinfo as an example:
[nhorman@hmsreliant dpdk]$ ./tools/dpdk-pmdinfo.py ./build/app/testacl
Traceback (most recent call last):
File "./tools/dpdk-pmdinfo.py", line 607, in <module>
main()
File "./tools/dpdk-pmdinfo.py", line 596, in main
readelf.process_dt_needed_entries()
File "./tools/dpdk-pmdinfo.py", line 437, in process_dt_needed_entries
rc = tag.needed.find("librte_pmd")
TypeError: a bytes-like object is required, not 'str'
I'm not saying it's a bad patchset, but the changelog should reflect that the
change is purely stylistic, not functional.
Neil
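The failure above is the usual Python 3 bytes/str split: under Python 3
pyelftools hands back tag.needed as bytes, and bytes.find() rejects a str
argument. A standalone reproduction (illustrative values only):

    needed = b"librte_pmd_ixgbe.so"  # bytes, as returned under Python 3
    needed.find("librte_pmd")        # raises TypeError on Python 3
    needed.find(b"librte_pmd")       # returns 0 on both Python 2 and 3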
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant
2016-12-09 15:28 ` [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant Neil Horman
@ 2016-12-09 17:00 ` Mcnamara, John
2016-12-09 17:06 ` Neil Horman
0 siblings, 1 reply; 28+ messages in thread
From: Mcnamara, John @ 2016-12-09 17:00 UTC (permalink / raw)
To: Neil Horman; +Cc: dev, mkletzan
> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com]
> Sent: Friday, December 9, 2016 3:29 PM
> To: Mcnamara, John <john.mcnamara@intel.com>
> Cc: dev@dpdk.org; mkletzan@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3
> compliant
>
> On Thu, Dec 08, 2016 at 03:51:01PM +0000, John McNamara wrote:
> > These patches refactor the DPDK Python applications to make them
> > Python 2/3 compatible.
> >
> > In order to do this the patchset starts by making the apps PEP8
> > compliant in accordance with the DPDK Coding guidelines:
> >
> >
> > http://dpdk.org/doc/guides/contributing/coding_style.html#python-code
> >
> > Implementing PEP8 and Python 2/3 compliance means that we can check
> > all future Python patches for consistency. Python 2/3 support also
> > makes downstream packaging easier as more distros move to Python 3 as
> the system python.
> >
> > See the previous discussion about Python2/3 compatibility here:
> >
> > http://dpdk.org/ml/archives/dev/2016-December/051683.html
> >
> > I've tested that the apps compile with python 2 and 3 and I've tested
> > some of the apps for consistent output but it needs additional testing.
> >
> > John McNamara (4):
> > app: make python apps pep8 compliant
> > app: make python apps python2/3 compliant
> > app: give python apps a consistent shebang line
> > doc: add required python versions to coding guidelines
> >
> > app/cmdline_test/cmdline_test.py | 87 ++-
> > app/cmdline_test/cmdline_test_data.py | 403 +++++-----
> > app/test/autotest.py | 46 +-
> > app/test/autotest_data.py | 831 +++++++++++---
> -------
> > app/test/autotest_runner.py | 740 +++++++++-----
> ----
> > app/test/autotest_test_funcs.py | 481 ++++++------
> > doc/guides/conf.py | 11 +-
> > doc/guides/contributing/coding_style.rst | 3 +-
> > examples/ip_pipeline/config/diagram-generator.py | 13 +-
> > .../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
> > tools/cpu_layout.py | 79 +-
> > tools/dpdk-devbind.py | 26 +-
> > tools/dpdk-pmdinfo.py | 73 +-
> > 13 files changed, 1410 insertions(+), 1394 deletions(-)
> >
> > --
> > 2.7.4
> >
> I think the changelog is deceptive. It claims to make all the utilities
> python2 and 3 compliant. But compliance with python3 is more than just
> stylistic formatting. After this series several of these apps continue to
> fail under python3. dpdk-pmdinfo as an example:
>
> [nhorman@hmsreliant dpdk]$ ./tools/dpdk-pmdinfo.py ./build/app/testacl
> Traceback (most recent call last):
> File "./tools/dpdk-pmdinfo.py", line 607, in <module>
> main()
> File "./tools/dpdk-pmdinfo.py", line 596, in main
> readelf.process_dt_needed_entries()
> File "./tools/dpdk-pmdinfo.py", line 437, in process_dt_needed_entries
> rc = tag.needed.find("librte_pmd")
> TypeError: a bytes-like object is required, not 'str'
>
>
> I'm not saying it's a bad patchset, but the changelog should reflect that
> the change is purely stylistic, not functional.
>
Hi Neil,
Mea culpa. In my defense I did say in the cover letter that I'd tested that the apps compiled but that they needed extra testing. I did functionally test some of the apps that I was more familiar with, but not all of them. In particular the test apps need functional testing.
However, the changes need to be functional rather than just cosmetic so I'll look into fixing pmdinfo with Python 3, unless you'd prefer to do that ;-). Since pmdinfo is dealing with binary data it may be tricky. That is often one of the real challenges of porting Python 2 code to Python 3. Hopefully elftools is compatible. Anyway I'll look into it.
And just to be clear, I don't think this patchset should be merged until all of the apps have been functionally tested. I'll put something in the final patchset to indicate that the modified apps have been tested.
John
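The usual pattern when porting code like this (a sketch, not taken from the
patchset): read binary files as bytes, compare against bytes literals, and
decode to str only at the text boundary:

    with open("pci.ids", "rb") as f:  # bytes on Python 3, str on Python 2
        raw = f.read()
    if raw.startswith(b"#"):          # compare bytes with a bytes literal
        pass
    text = raw.decode("utf-8")        # decode once, where text is needed
    print(text.splitlines()[0])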
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant
2016-12-09 17:00 ` Mcnamara, John
@ 2016-12-09 17:06 ` Neil Horman
2016-12-09 17:41 ` Mcnamara, John
0 siblings, 1 reply; 28+ messages in thread
From: Neil Horman @ 2016-12-09 17:06 UTC (permalink / raw)
To: Mcnamara, John; +Cc: dev, mkletzan
On Fri, Dec 09, 2016 at 05:00:19PM +0000, Mcnamara, John wrote:
>
>
> > -----Original Message-----
> > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > Sent: Friday, December 9, 2016 3:29 PM
> > To: Mcnamara, John <john.mcnamara@intel.com>
> > Cc: dev@dpdk.org; mkletzan@redhat.com
> > Subject: Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3
> > compliant
> >
> > On Thu, Dec 08, 2016 at 03:51:01PM +0000, John McNamara wrote:
> > > These patches refactor the DPDK Python applications to make them
> > > Python 2/3 compatible.
> > >
> > > In order to do this the patchset starts by making the apps PEP8
> > > compliant in accordance with the DPDK Coding guidelines:
> > >
> > >
> > > http://dpdk.org/doc/guides/contributing/coding_style.html#python-code
> > >
> > > Implementing PEP8 and Python 2/3 compliance means that we can check
> > > all future Python patches for consistency. Python 2/3 support also
> > > makes downstream packaging easier as more distros move to Python 3 as
> > the system python.
> > >
> > > See the previous discussion about Python2/3 compatibility here:
> > >
> > > http://dpdk.org/ml/archives/dev/2016-December/051683.html
> > >
> > > I've tested that the apps compile with python 2 and 3 and I've tested
> > > some of the apps for consistent output but it needs additional testing.
> > >
> > > John McNamara (4):
> > > app: make python apps pep8 compliant
> > > app: make python apps python2/3 compliant
> > > app: give python apps a consistent shebang line
> > > doc: add required python versions to coding guidelines
> > >
> > > app/cmdline_test/cmdline_test.py | 87 ++-
> > > app/cmdline_test/cmdline_test_data.py | 403 +++++-----
> > > app/test/autotest.py | 46 +-
> > > app/test/autotest_data.py | 831 +++++++++++---
> > -------
> > > app/test/autotest_runner.py | 740 +++++++++-----
> > ----
> > > app/test/autotest_test_funcs.py | 481 ++++++------
> > > doc/guides/conf.py | 11 +-
> > > doc/guides/contributing/coding_style.rst | 3 +-
> > > examples/ip_pipeline/config/diagram-generator.py | 13 +-
> > > .../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
> > > tools/cpu_layout.py | 79 +-
> > > tools/dpdk-devbind.py | 26 +-
> > > tools/dpdk-pmdinfo.py | 73 +-
> > > 13 files changed, 1410 insertions(+), 1394 deletions(-)
> > >
> > > --
> > > 2.7.4
> > >
> > I think the changelog is deceptive. It claims to make all the utilities
> > python2 and 3 compliant. But compliance with python3 is more than just
> > stylistic formatting. After this series several of these apps continue to
> > fail under python3. dpdk-pmdinfo as an example:
> >
> > [nhorman@hmsreliant dpdk]$ ./tools/dpdk-pmdinfo.py ./build/app/testacl
> > Traceback (most recent call last):
> > File "./tools/dpdk-pmdinfo.py", line 607, in <module>
> > main()
> > File "./tools/dpdk-pmdinfo.py", line 596, in main
> > readelf.process_dt_needed_entries()
> > File "./tools/dpdk-pmdinfo.py", line 437, in process_dt_needed_entries
> > rc = tag.needed.find("librte_pmd")
> > TypeError: a bytes-like object is required, not 'str'
> >
> >
> > I'm not saying it's a bad patchset, but the changelog should reflect that
> > the change is purely stylistic, not functional.
> >
>
> Hi Neil,
>
> Mea culpa. In my defense I did say in the cover letter that I'd tested that the apps compiled but that they needed extra testing. I did functionally test some of the apps that I was more familiar with, but not all of them. In particular the test apps need functional testing.
>
> However, the changes need to be functional rather than just cosmetic so I'll look into fixing pmdinfo with Python 3, unless you'd prefer to do that ;-). Since pmdinfo is dealing with binary data it may be tricky. That is often one of the real challenges of porting Python 2 code to Python 3. Hopefully elftools is compatible. Anyway I'll look into it.
>
> And just to be clear, I don't think this patchset should be merged until all of the apps have been functionally tested. I'll put something in the final patchset to indicate that the modified apps have been tested.
>
I completely agree that the utilities should be functionally compatible with
python 3, but in regards to this patch set, all I'm really asking for is that
the changelog reflect that its just making stylistic changes to comply with pep8
(i.e. not fixing python 2/3 compatibility issues). The latter can be handled in
subsequent patches piecemeal.
Neil
> John
>
>
>
>
>
>
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant
2016-12-09 17:06 ` Neil Horman
@ 2016-12-09 17:41 ` Mcnamara, John
0 siblings, 0 replies; 28+ messages in thread
From: Mcnamara, John @ 2016-12-09 17:41 UTC (permalink / raw)
To: Neil Horman; +Cc: dev, mkletzan
> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com]
> Sent: Friday, December 9, 2016 5:06 PM
> To: Mcnamara, John <john.mcnamara@intel.com>
> Cc: dev@dpdk.org; mkletzan@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3
> compliant
>
> On Fri, Dec 09, 2016 at 05:00:19PM +0000, Mcnamara, John wrote:
> >
> >
> > > -----Original Message-----
> > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > Sent: Friday, December 9, 2016 3:29 PM
> > > To: Mcnamara, John <john.mcnamara@intel.com>
> > > Cc: dev@dpdk.org; mkletzan@redhat.com
> > > Subject: Re: [dpdk-dev] [PATCH v1 0/4] app: make python apps
> > > python2/3 compliant
> > >
> > > On Thu, Dec 08, 2016 at 03:51:01PM +0000, John McNamara wrote:
> > > > These patches refactor the DPDK Python applications to make them
> > > > Python 2/3 compatible.
> > > >
> > > > In order to do this the patchset starts by making the apps PEP8
> > > > compliant in accordance with the DPDK Coding guidelines:
> > > >
> > > >
> > > > http://dpdk.org/doc/guides/contributing/coding_style.html#python-c
> > > > ode
> > > >
> > > > Implementing PEP8 and Python 2/3 compliance means that we can
> > > > check all future Python patches for consistency. Python 2/3
> > > > support also makes downstream packaging easier as more distros
> > > > move to Python 3 as
> > > the system python.
> > > >
> > > > See the previous discussion about Python2/3 compatibility here:
> > > >
> > > > http://dpdk.org/ml/archives/dev/2016-December/051683.html
> > > >
> > > > I've tested that the apps compile with python 2 and 3 and I've
> > > > tested some of the apps for consistent output but it needs
> additional testing.
> > > >
> > > > John McNamara (4):
> > > > app: make python apps pep8 compliant
> > > > app: make python apps python2/3 compliant
> > > > app: give python apps a consistent shebang line
> > > > doc: add required python versions to coding guidelines
> > > >
> > > ...
> >
> > And just to be clear, I don't think this patchset should be merged until
> all of the apps have been functionally tested. I'll put something in the
> final patchset to indicate that the modified apps have been tested.
> >
> I completely agree that the utilities should be functionally compatible
> with python 3, but in regards to this patch set, all I'm really asking for
> is that the changelog reflect that its just making stylistic changes to
> comply with pep8 (i.e. not fixing python 2/3 compatibility issues). The
> latter can be handled in subsequent patches piecemeal.
>
Hi Neil,
Sorry if I'm missing something, but isn't that how the changelog is structured?
From above:
John McNamara (4):
app: make python apps pep8 compliant
app: make python apps python2/3 compliant
app: give python apps a consistent shebang line
doc: add required python versions to coding guidelines
The PEP8 changes are made in 1/4. The Python 2/3 changes are made in 2/4.
Also, the pmdinfo issue was easier to resolve than I feared. The following should fix the exception (on top of the other patches). I tested the output with a couple of pmds but maybe
you could verify it as well.
John
$ git diff
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 097982e..d4e51aa 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -434,7 +434,7 @@ def process_dt_needed_entries(self):
for tag in dynsec.iter_tags():
if tag.entry.d_tag == 'DT_NEEDED':
- rc = tag.needed.find("librte_pmd")
+ rc = tag.needed.find(b"librte_pmd")
if (rc != -1):
library = search_file(tag.needed,
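The b prefix makes the literal a bytes object on Python 3 and is a no-op on
Python 2 (where bytes is an alias for str), so the same line works in both;
a quick interpreter check (illustrative values):

    >>> b"librte_pmd_ixgbe.so".find(b"librte_pmd")
    0
    >>> b"libc.so.6".find(b"librte_pmd")
    -1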
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v2 0/3] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (8 preceding siblings ...)
2016-12-09 15:28 ` [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant Neil Horman
@ 2016-12-18 14:25 ` John McNamara
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 1/3] app: make python apps pep8 compliant John McNamara
` (10 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:25 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
These patches refactor the DPDK Python applications to make them Python 2/3
compatible.
In order to do this the patchset starts by making the apps PEP8 compliant in
accordance with the DPDK Coding guidelines:
http://dpdk.org/doc/guides/contributing/coding_style.html#python-code
Implementing PEP8 and Python 2/3 compliance means that we can check all future
Python patches for consistency. Python 2/3 support also makes downstream
packaging easier as more distros move to Python 3 as the system python.
See the previous discussion about Python2/3 compatibility here:
http://dpdk.org/ml/archives/dev/2016-December/051683.html
V2: * Squash shebang patch into Python 3 patch.
* Only add /usr/bin/env shebang line to code that is executable.
John McNamara (3):
app: make python apps pep8 compliant
app: make python apps python2/3 compliant
doc: add required python versions to docs
app/cmdline_test/cmdline_test.py | 87 ++-
app/cmdline_test/cmdline_test_data.py | 403 +++++-----
app/test/autotest.py | 46 +-
app/test/autotest_data.py | 831 ++++++++++-----------
app/test/autotest_runner.py | 740 +++++++++---------
app/test/autotest_test_funcs.py | 481 ++++++------
doc/guides/conf.py | 9 +-
doc/guides/contributing/coding_style.rst | 3 +-
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 79 +-
tools/dpdk-devbind.py | 25 +-
tools/dpdk-pmdinfo.py | 75 +-
14 files changed, 1405 insertions(+), 1400 deletions(-)
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v2 1/3] app: make python apps pep8 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (9 preceding siblings ...)
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 0/3] " John McNamara
@ 2016-12-18 14:25 ` John McNamara
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 2/3] app: make python apps python2/3 compliant John McNamara
` (9 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:25 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
Make all DPDK Python applications compliant with the PEP8 standard
to allow for consistency checking of patches and to allow further
refactoring.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 81 +-
app/cmdline_test/cmdline_test_data.py | 401 +++++-----
app/test/autotest.py | 40 +-
app/test/autotest_data.py | 831 +++++++++++----------
app/test/autotest_runner.py | 739 +++++++++---------
app/test/autotest_test_funcs.py | 479 ++++++------
doc/guides/conf.py | 9 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 55 +-
tools/dpdk-devbind.py | 23 +-
tools/dpdk-pmdinfo.py | 61 +-
12 files changed, 1376 insertions(+), 1367 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 8efc5ea..4729987 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -33,16 +33,21 @@
# Script that runs cmdline_test app and feeds keystrokes into it.
-import sys, pexpect, string, os, cmdline_test_data
+import cmdline_test_data
+import os
+import pexpect
+import sys
+
#
# function to run test
#
-def runTest(child,test):
- child.send(test["Sequence"])
- if test["Result"] == None:
- return 0
- child.expect(test["Result"],1)
+def runTest(child, test):
+ child.send(test["Sequence"])
+ if test["Result"] is None:
+ return 0
+ child.expect(test["Result"], 1)
+
#
# history test is a special case
@@ -57,57 +62,57 @@ def runTest(child,test):
# This is a self-contained test, it needs only a pexpect child
#
def runHistoryTest(child):
- # find out history size
- child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
- child.expect("History buffer size: \\d+", timeout=1)
- history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
- i = 0
+ # find out history size
+ child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
+ child.expect("History buffer size: \\d+", timeout=1)
+ history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
+ i = 0
- # fill the history with numbers
- while i < history_size / 10:
- # add 1 to prevent from parsing as octals
- child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
- # the app will simply print out the number
- child.expect(str(i + 100000000), timeout=1)
- i += 1
- # scroll back history
- child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
- child.expect("100000000", timeout=1)
+ # fill the history with numbers
+ while i < history_size / 10:
+ # add 1 to prevent from parsing as octals
+ child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
+ # the app will simply print out the number
+ child.expect(str(i + 100000000), timeout=1)
+ i += 1
+ # scroll back history
+ child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
+ child.expect("100000000", timeout=1)
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
child = pexpect.spawn(test_app_path)
print "Running command-line tests..."
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
- try:
- runTest(child,test)
- print "PASS"
- except:
- print "FAIL"
- print child
- sys.exit(1)
+ print (test["Name"] + ":").ljust(30),
+ try:
+ runTest(child, test)
+ print "PASS"
+ except:
+ print "FAIL"
+ print child
+ sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
print ("History fill test:").ljust(30),
try:
- runHistoryTest(child)
- print "PASS"
+ runHistoryTest(child)
+ print "PASS"
except:
- print "FAIL"
- print child
- sys.exit(1)
+ print "FAIL"
+ print child
+ sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index b1945a5..3ce6cbc 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -33,8 +33,6 @@
# collection of static data
-import sys
-
# keycode constants
CTRL_A = chr(1)
CTRL_B = chr(2)
@@ -95,217 +93,220 @@
# and expected output (if any).
tests = [
-# test basic commands
- {"Name" : "command test 1",
- "Sequence" : "ambiguous first" + ENTER,
- "Result" : CMD1},
- {"Name" : "command test 2",
- "Sequence" : "ambiguous second" + ENTER,
- "Result" : CMD2},
- {"Name" : "command test 3",
- "Sequence" : "ambiguous ambiguous" + ENTER,
- "Result" : AMBIG},
- {"Name" : "command test 4",
- "Sequence" : "ambiguous ambiguous2" + ENTER,
- "Result" : AMBIG},
+ # test basic commands
+ {"Name": "command test 1",
+ "Sequence": "ambiguous first" + ENTER,
+ "Result": CMD1},
+ {"Name": "command test 2",
+ "Sequence": "ambiguous second" + ENTER,
+ "Result": CMD2},
+ {"Name": "command test 3",
+ "Sequence": "ambiguous ambiguous" + ENTER,
+ "Result": AMBIG},
+ {"Name": "command test 4",
+ "Sequence": "ambiguous ambiguous2" + ENTER,
+ "Result": AMBIG},
- {"Name" : "invalid command test 1",
- "Sequence" : "ambiguous invalid" + ENTER,
- "Result" : BAD_ARG},
-# test invalid commands
- {"Name" : "invalid command test 2",
- "Sequence" : "invalid" + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "invalid command test 3",
- "Sequence" : "ambiguousinvalid" + ENTER2,
- "Result" : NOT_FOUND},
+ {"Name": "invalid command test 1",
+ "Sequence": "ambiguous invalid" + ENTER,
+ "Result": BAD_ARG},
+ # test invalid commands
+ {"Name": "invalid command test 2",
+ "Sequence": "invalid" + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "invalid command test 3",
+ "Sequence": "ambiguousinvalid" + ENTER2,
+ "Result": NOT_FOUND},
-# test arrows and deletes
- {"Name" : "arrows & delete test 1",
- "Sequence" : "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
- "Result" : SINGLE},
- {"Name" : "arrows & delete test 2",
- "Sequence" : "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
- "Result" : SINGLE},
+ # test arrows and deletes
+ {"Name": "arrows & delete test 1",
+ "Sequence": "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
+ "Result": SINGLE},
+ {"Name": "arrows & delete test 2",
+ "Sequence": "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
+ "Result": SINGLE},
-# test backspace
- {"Name" : "backspace test",
- "Sequence" : "singlebad" + BKSPACE*3 + ENTER,
- "Result" : SINGLE},
+ # test backspace
+ {"Name": "backspace test",
+ "Sequence": "singlebad" + BKSPACE*3 + ENTER,
+ "Result": SINGLE},
-# test goto left and goto right
- {"Name" : "goto left test",
- "Sequence" : "biguous first" + CTRL_A + "am" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right test",
- "Sequence" : "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
- "Result" : CMD1},
+ # test goto left and goto right
+ {"Name": "goto left test",
+ "Sequence": "biguous first" + CTRL_A + "am" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right test",
+ "Sequence": "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
+ "Result": CMD1},
-# test goto words
- {"Name" : "goto left word test",
- "Sequence" : "ambiguous st" + ALT_B + "fir" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right word test",
- "Sequence" : "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
- "Result" : CMD1},
+ # test goto words
+ {"Name": "goto left word test",
+ "Sequence": "ambiguous st" + ALT_B + "fir" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right word test",
+ "Sequence": "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
+ "Result": CMD1},
-# test removing words
- {"Name" : "remove left word 1",
- "Sequence" : "single invalid" + CTRL_W + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove left word 2",
- "Sequence" : "single invalid" + ALT_BKSPACE + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove right word",
- "Sequence" : "single invalid" + ALT_B + ALT_D + ENTER,
- "Result" : SINGLE},
+ # test removing words
+ {"Name": "remove left word 1",
+ "Sequence": "single invalid" + CTRL_W + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove left word 2",
+ "Sequence": "single invalid" + ALT_BKSPACE + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove right word",
+ "Sequence": "single invalid" + ALT_B + ALT_D + ENTER,
+ "Result": SINGLE},
-# test kill buffer (copy and paste)
- {"Name" : "killbuffer test 1",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A + CTRL_Y + ENTER,
- "Result" : CMD1},
- {"Name" : "killbuffer test 2",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
- "Result" : NOT_FOUND},
+ # test kill buffer (copy and paste)
+ {"Name": "killbuffer test 1",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A +
+ CTRL_Y + ENTER,
+ "Result": CMD1},
+ {"Name": "killbuffer test 2",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
+ "Result": NOT_FOUND},
-# test newline
- {"Name" : "newline test",
- "Sequence" : "invalid" + CTRL_C + "single" + ENTER,
- "Result" : SINGLE},
+ # test newline
+ {"Name": "newline test",
+ "Sequence": "invalid" + CTRL_C + "single" + ENTER,
+ "Result": SINGLE},
-# test redisplay (nothing should really happen)
- {"Name" : "redisplay test",
- "Sequence" : "single" + CTRL_L + ENTER,
- "Result" : SINGLE},
+ # test redisplay (nothing should really happen)
+ {"Name": "redisplay test",
+ "Sequence": "single" + CTRL_L + ENTER,
+ "Result": SINGLE},
-# test autocomplete
- {"Name" : "autocomplete test 1",
- "Sequence" : "si" + TAB + ENTER,
- "Result" : SINGLE},
- {"Name" : "autocomplete test 2",
- "Sequence" : "si" + TAB + "_" + TAB + ENTER,
- "Result" : SINGLE_LONG},
- {"Name" : "autocomplete test 3",
- "Sequence" : "in" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 4",
- "Sequence" : "am" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 5",
- "Sequence" : "am" + TAB + "fir" + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 6",
- "Sequence" : "am" + TAB + "fir" + TAB + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 7",
- "Sequence" : "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 8",
- "Sequence" : "am" + TAB + " am" + TAB + " " + ENTER,
- "Result" : AMBIG},
- {"Name" : "autocomplete test 9",
- "Sequence" : "am" + TAB + "inv" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 10",
- "Sequence" : "au" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 11",
- "Sequence" : "au" + TAB + "1" + ENTER,
- "Result" : AUTO1},
- {"Name" : "autocomplete test 12",
- "Sequence" : "au" + TAB + "2" + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 13",
- "Sequence" : "au" + TAB + "2" + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 14",
- "Sequence" : "au" + TAB + "2 " + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 15",
- "Sequence" : "24" + TAB + ENTER,
- "Result" : "24"},
+ # test autocomplete
+ {"Name": "autocomplete test 1",
+ "Sequence": "si" + TAB + ENTER,
+ "Result": SINGLE},
+ {"Name": "autocomplete test 2",
+ "Sequence": "si" + TAB + "_" + TAB + ENTER,
+ "Result": SINGLE_LONG},
+ {"Name": "autocomplete test 3",
+ "Sequence": "in" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 4",
+ "Sequence": "am" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 5",
+ "Sequence": "am" + TAB + "fir" + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 6",
+ "Sequence": "am" + TAB + "fir" + TAB + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 7",
+ "Sequence": "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 8",
+ "Sequence": "am" + TAB + " am" + TAB + " " + ENTER,
+ "Result": AMBIG},
+ {"Name": "autocomplete test 9",
+ "Sequence": "am" + TAB + "inv" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 10",
+ "Sequence": "au" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 11",
+ "Sequence": "au" + TAB + "1" + ENTER,
+ "Result": AUTO1},
+ {"Name": "autocomplete test 12",
+ "Sequence": "au" + TAB + "2" + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 13",
+ "Sequence": "au" + TAB + "2" + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 14",
+ "Sequence": "au" + TAB + "2 " + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 15",
+ "Sequence": "24" + TAB + ENTER,
+ "Result": "24"},
-# test history
- {"Name" : "history test 1",
- "Sequence" : "invalid" + ENTER + "single" + ENTER + "invalid" + ENTER + UP + CTRL_P + ENTER,
- "Result" : SINGLE},
- {"Name" : "history test 2",
- "Sequence" : "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" + ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
- "Result" : SINGLE},
+ # test history
+ {"Name": "history test 1",
+ "Sequence": "invalid" + ENTER + "single" + ENTER + "invalid" +
+ ENTER + UP + CTRL_P + ENTER,
+ "Result": SINGLE},
+ {"Name": "history test 2",
+ "Sequence": "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" +
+ ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
+ "Result": SINGLE},
-#
-# tests that improve coverage
-#
+ #
+ # tests that improve coverage
+ #
-# empty space tests
- {"Name" : "empty space test 1",
- "Sequence" : RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 2",
- "Sequence" : BKSPACE + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 3",
- "Sequence" : CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 4",
- "Sequence" : ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 5",
- "Sequence" : " " + CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 6",
- "Sequence" : " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 7",
- "Sequence" : " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 8",
- "Sequence" : " space" + CTRL_W*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 9",
- "Sequence" : " space" + ALT_BKSPACE*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 10",
- "Sequence" : " space " + CTRL_A + ALT_D*3 + ENTER,
- "Result" : PROMPT},
+ # empty space tests
+ {"Name": "empty space test 1",
+ "Sequence": RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 2",
+ "Sequence": BKSPACE + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 3",
+ "Sequence": CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 4",
+ "Sequence": ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 5",
+ "Sequence": " " + CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 6",
+ "Sequence": " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 7",
+ "Sequence": " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 8",
+ "Sequence": " space" + CTRL_W*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 9",
+ "Sequence": " space" + ALT_BKSPACE*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 10",
+ "Sequence": " space " + CTRL_A + ALT_D*3 + ENTER,
+ "Result": PROMPT},
-# non-printable char tests
- {"Name" : "non-printable test 1",
- "Sequence" : chr(27) + chr(47) + ENTER,
- "Result" : PROMPT},
- {"Name" : "non-printable test 2",
- "Sequence" : chr(27) + chr(128) + ENTER*7,
- "Result" : PROMPT},
- {"Name" : "non-printable test 3",
- "Sequence" : chr(27) + chr(91) + chr(127) + ENTER*6,
- "Result" : PROMPT},
+ # non-printable char tests
+ {"Name": "non-printable test 1",
+ "Sequence": chr(27) + chr(47) + ENTER,
+ "Result": PROMPT},
+ {"Name": "non-printable test 2",
+ "Sequence": chr(27) + chr(128) + ENTER*7,
+ "Result": PROMPT},
+ {"Name": "non-printable test 3",
+ "Sequence": chr(27) + chr(91) + chr(127) + ENTER*6,
+ "Result": PROMPT},
-# miscellaneous tests
- {"Name" : "misc test 1",
- "Sequence" : ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 2",
- "Sequence" : "single #comment" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 3",
- "Sequence" : "#empty line" + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 4",
- "Sequence" : " single " + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 5",
- "Sequence" : "single#" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 6",
- "Sequence" : 'a' * 257 + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "misc test 7",
- "Sequence" : "clear_history" + UP*5 + DOWN*5 + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 8",
- "Sequence" : "a" + HELP + CTRL_C,
- "Result" : PROMPT},
- {"Name" : "misc test 9",
- "Sequence" : CTRL_D*3,
- "Result" : None},
+ # miscellaneous tests
+ {"Name": "misc test 1",
+ "Sequence": ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 2",
+ "Sequence": "single #comment" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 3",
+ "Sequence": "#empty line" + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 4",
+ "Sequence": " single " + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 5",
+ "Sequence": "single#" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 6",
+ "Sequence": 'a' * 257 + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "misc test 7",
+ "Sequence": "clear_history" + UP*5 + DOWN*5 + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 8",
+ "Sequence": "a" + HELP + CTRL_C,
+ "Result": PROMPT},
+ {"Name": "misc test 9",
+ "Sequence": CTRL_D*3,
+ "Result": None},
]
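
For reference, the "Sequence" strings above are plain string concatenation
over keystroke constants defined earlier in cmdline_test_data.py. A minimal
sketch of the convention (the exact values here are assumptions, following
the usual terminal encodings):

    ENTER = "\n"
    TAB = chr(9)           # completion trigger
    CTRL_A = chr(1)        # Ctrl-A: go to start of line
    CTRL_E = chr(5)        # Ctrl-E: go to end of line
    UP = chr(27) + "[A"    # ANSI escape sequence for the Up arrow

    # a test sequence is just concatenated keystrokes, e.g. the
    # "goto right test" above sends:
    sequence = "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER
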
diff --git a/app/test/autotest.py b/app/test/autotest.py
index b9fd6b6..3a00538 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -33,44 +33,46 @@
# Script that uses either test app or qemu controlled by python-pexpect
-import sys, autotest_data, autotest_runner
-
+import autotest_data
+import autotest_runner
+import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print "Usage: autotest.py [test app|test iso image]",
+ print "[target] [whitelist|-blacklist]"
if len(sys.argv) < 3:
- usage()
- sys.exit(1)
+ usage()
+ sys.exit(1)
target = sys.argv[2]
-test_whitelist=None
-test_blacklist=None
+test_whitelist = None
+test_blacklist = None
# get blacklist/whitelist
if len(sys.argv) > 3:
- testlist = sys.argv[3].split(',')
- testlist = [test.lower() for test in testlist]
- if testlist[0].startswith('-'):
- testlist[0] = testlist[0].lstrip('-')
- test_blacklist = testlist
- else:
- test_whitelist = testlist
+ testlist = sys.argv[3].split(',')
+ testlist = [test.lower() for test in testlist]
+ if testlist[0].startswith('-'):
+ testlist[0] = testlist[0].lstrip('-')
+ test_blacklist = testlist
+ else:
+ test_whitelist = testlist
-cmdline = "%s -c f -n 4"%(sys.argv[1])
+cmdline = "%s -c f -n 4" % (sys.argv[1])
print cmdline
-runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist, test_whitelist)
+runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
+ test_whitelist)
for test_group in autotest_data.parallel_test_group_list:
- runner.add_parallel_test_group(test_group)
+ runner.add_parallel_test_group(test_group)
for test_group in autotest_data.non_parallel_test_group_list:
- runner.add_non_parallel_test_group(test_group)
+ runner.add_non_parallel_test_group(test_group)
num_fails = runner.run_all_tests()
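
Given the argument parsing above, a leading '-' on the test list selects
blacklist mode; a quick sketch of the effect (argv values hypothetical):

    import sys

    # as if invoked: autotest.py ./app/test x86_64-native-linuxapp-gcc -mempool,ring
    sys.argv = ["autotest.py", "./app/test",
                "x86_64-native-linuxapp-gcc", "-mempool,ring"]

    testlist = [t.lower() for t in sys.argv[3].split(",")]
    if testlist[0].startswith("-"):
        testlist[0] = testlist[0].lstrip("-")
        print("blacklist: %s" % testlist)   # -> ['mempool', 'ring']
    else:
        print("whitelist: %s" % testlist)
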
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 9e8fd94..0cf4cfd 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -36,12 +36,14 @@
from glob import glob
from autotest_test_funcs import *
+
# quick and dirty function to find out number of sockets
def num_sockets():
- result = len(glob("/sys/devices/system/node/node*"))
- if result == 0:
- return 1
- return result
+ result = len(glob("/sys/devices/system/node/node*"))
+ if result == 0:
+ return 1
+ return result
+
# Assign given number to each socket
# e.g. 32 becomes 32,32 or 32,32,32,32
@@ -51,420 +53,419 @@ def per_sockets(num):
# groups of tests that can be run in parallel
# the grouping has been found largely empirically
parallel_test_group_list = [
-
-{
- "Prefix": "group_1",
- "Memory" : per_sockets(8),
- "Tests" :
- [
- {
- "Name" : "Cycles autotest",
- "Command" : "cycles_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Timer autotest",
- "Command" : "timer_autotest",
- "Func" : timer_autotest,
- "Report" : None,
- },
- {
- "Name" : "Debug autotest",
- "Command" : "debug_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Errno autotest",
- "Command" : "errno_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Meter autotest",
- "Command" : "meter_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Common autotest",
- "Command" : "common_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Resource autotest",
- "Command" : "resource_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_2",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Memory autotest",
- "Command" : "memory_autotest",
- "Func" : memory_autotest,
- "Report" : None,
- },
- {
- "Name" : "Read/write lock autotest",
- "Command" : "rwlock_autotest",
- "Func" : rwlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Logs autotest",
- "Command" : "logs_autotest",
- "Func" : logs_autotest,
- "Report" : None,
- },
- {
- "Name" : "CPU flags autotest",
- "Command" : "cpuflags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Version autotest",
- "Command" : "version_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL filesystem autotest",
- "Command" : "eal_fs_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL flags autotest",
- "Command" : "eal_flags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Hash autotest",
- "Command" : "hash_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ],
-},
-{
- "Prefix": "group_3",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "LPM autotest",
- "Command" : "lpm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "LPM6 autotest",
- "Command" : "lpm6_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memcpy autotest",
- "Command" : "memcpy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memzone autotest",
- "Command" : "memzone_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "String autotest",
- "Command" : "string_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Alarm autotest",
- "Command" : "alarm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_4",
- "Memory" : per_sockets(128),
- "Tests" :
- [
- {
- "Name" : "PCI autotest",
- "Command" : "pci_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Malloc autotest",
- "Command" : "malloc_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Multi-process autotest",
- "Command" : "multiprocess_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mbuf autotest",
- "Command" : "mbuf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Per-lcore autotest",
- "Command" : "per_lcore_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Ring autotest",
- "Command" : "ring_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_5",
- "Memory" : "32",
- "Tests" :
- [
- {
- "Name" : "Spinlock autotest",
- "Command" : "spinlock_autotest",
- "Func" : spinlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Byte order autotest",
- "Command" : "byteorder_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "TAILQ autotest",
- "Command" : "tailq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Command-line autotest",
- "Command" : "cmdline_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Interrupts autotest",
- "Command" : "interrupt_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_6",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Function reentrancy autotest",
- "Command" : "func_reentrancy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mempool autotest",
- "Command" : "mempool_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Atomics autotest",
- "Command" : "atomic_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Prefetch autotest",
- "Command" : "prefetch_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Red autotest",
- "Command" : "red_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
-{
- "Prefix" : "group_7",
- "Memory" : "64",
- "Tests" :
- [
- {
- "Name" : "PMD ring autotest",
- "Command" : "ring_pmd_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Access list control autotest",
- "Command" : "acl_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Sched autotest",
- "Command" : "sched_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
+ {
+ "Prefix": "group_1",
+ "Memory": per_sockets(8),
+ "Tests":
+ [
+ {
+ "Name": "Cycles autotest",
+ "Command": "cycles_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Timer autotest",
+ "Command": "timer_autotest",
+ "Func": timer_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Debug autotest",
+ "Command": "debug_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Errno autotest",
+ "Command": "errno_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Meter autotest",
+ "Command": "meter_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Common autotest",
+ "Command": "common_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Resource autotest",
+ "Command": "resource_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_2",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Memory autotest",
+ "Command": "memory_autotest",
+ "Func": memory_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Read/write lock autotest",
+ "Command": "rwlock_autotest",
+ "Func": rwlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Logs autotest",
+ "Command": "logs_autotest",
+ "Func": logs_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "CPU flags autotest",
+ "Command": "cpuflags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Version autotest",
+ "Command": "version_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL filesystem autotest",
+ "Command": "eal_fs_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL flags autotest",
+ "Command": "eal_flags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Hash autotest",
+ "Command": "hash_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ],
+ },
+ {
+ "Prefix": "group_3",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "LPM autotest",
+ "Command": "lpm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "LPM6 autotest",
+ "Command": "lpm6_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memcpy autotest",
+ "Command": "memcpy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memzone autotest",
+ "Command": "memzone_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "String autotest",
+ "Command": "string_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Alarm autotest",
+ "Command": "alarm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_4",
+ "Memory": per_sockets(128),
+ "Tests":
+ [
+ {
+ "Name": "PCI autotest",
+ "Command": "pci_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Malloc autotest",
+ "Command": "malloc_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Multi-process autotest",
+ "Command": "multiprocess_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mbuf autotest",
+ "Command": "mbuf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Per-lcore autotest",
+ "Command": "per_lcore_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Ring autotest",
+ "Command": "ring_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_5",
+ "Memory": "32",
+ "Tests":
+ [
+ {
+ "Name": "Spinlock autotest",
+ "Command": "spinlock_autotest",
+ "Func": spinlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Byte order autotest",
+ "Command": "byteorder_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "TAILQ autotest",
+ "Command": "tailq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Command-line autotest",
+ "Command": "cmdline_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Interrupts autotest",
+ "Command": "interrupt_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_6",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Function reentrancy autotest",
+ "Command": "func_reentrancy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mempool autotest",
+ "Command": "mempool_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Atomics autotest",
+ "Command": "atomic_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Prefetch autotest",
+ "Command": "prefetch_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Red autotest",
+ "Command": "red_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_7",
+ "Memory": "64",
+ "Tests":
+ [
+ {
+ "Name": "PMD ring autotest",
+ "Command": "ring_pmd_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Access list control autotest",
+ "Command": "acl_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Sched autotest",
+ "Command": "sched_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
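
The per_sockets() values above expand to one memory amount per NUMA node,
per the comment near the top of the file ("32 becomes 32,32"). A sketch
consistent with that comment (the actual per_sockets body falls outside
this hunk, so it is reconstructed here as an assumption):

    from glob import glob

    def num_sockets():
        # count NUMA nodes exposed by sysfs, defaulting to 1
        result = len(glob("/sys/devices/system/node/node*"))
        return result if result > 0 else 1

    def per_sockets(num):
        # assumed body: one entry per socket, joined with commas
        return ",".join([str(num)] * num_sockets())

    print(per_sockets(512))   # e.g. "512,512" on a two-socket machine
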
# tests that should not be run when any other tests are running
non_parallel_test_group_list = [
-{
- "Prefix" : "kni",
- "Memory" : "512",
- "Tests" :
- [
- {
- "Name" : "KNI autotest",
- "Command" : "kni_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "mempool_perf",
- "Memory" : per_sockets(256),
- "Tests" :
- [
- {
- "Name" : "Mempool performance autotest",
- "Command" : "mempool_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "memcpy_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Memcpy performance autotest",
- "Command" : "memcpy_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "hash_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Hash performance autotest",
- "Command" : "hash_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power autotest",
- "Command" : "power_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_acpi_cpufreq",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power ACPI cpufreq autotest",
- "Command" : "power_acpi_cpufreq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_kvm_vm",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power KVM VM autotest",
- "Command" : "power_kvm_vm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "timer_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Timer performance autotest",
- "Command" : "timer_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ {
+ "Prefix": "kni",
+ "Memory": "512",
+ "Tests":
+ [
+ {
+ "Name": "KNI autotest",
+ "Command": "kni_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "mempool_perf",
+ "Memory": per_sockets(256),
+ "Tests":
+ [
+ {
+ "Name": "Mempool performance autotest",
+ "Command": "mempool_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "memcpy_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Memcpy performance autotest",
+ "Command": "memcpy_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "hash_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Hash performance autotest",
+ "Command": "hash_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power autotest",
+ "Command": "power_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_acpi_cpufreq",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power ACPI cpufreq autotest",
+ "Command": "power_acpi_cpufreq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_kvm_vm",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power KVM VM autotest",
+ "Command": "power_kvm_vm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "timer_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Timer performance autotest",
+ "Command": "timer_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
-#
-# Please always make sure that ring_perf is the last test!
-#
-{
- "Prefix": "ring_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Ring performance autotest",
- "Command" : "ring_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ #
+ # Please always make sure that ring_perf is the last test!
+ #
+ {
+ "Prefix": "ring_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Ring performance autotest",
+ "Command": "ring_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 21d3be2..55b63a8 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -33,20 +33,29 @@
# The main logic behind running autotests in parallel
-import multiprocessing, subprocess, sys, pexpect, re, time, os, StringIO, csv
+import StringIO
+import csv
+import multiprocessing
+import pexpect
+import re
+import subprocess
+import sys
+import time
# wait for prompt
+
+
def wait_prompt(child):
- try:
- child.sendline()
- result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
- timeout = 120)
- except:
- return False
- if result == 0:
- return True
- else:
- return False
+ try:
+ child.sendline()
+ result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
+ timeout=120)
+ except:
+ return False
+ if result == 0:
+ return True
+ else:
+ return False
# run a test group
# each result tuple in results list consists of:
@@ -60,363 +69,363 @@ def wait_prompt(child):
# this function needs to be outside AutotestRunner class
# because otherwise Pool won't work (or rather it will require
# quite a bit of effort to make it work).
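
The constraint mentioned above is Python 2 pickling: multiprocessing.Pool
ships work to its workers by pickling the callable, and bound methods are
not picklable on Python 2, while module-level functions are. A standalone
illustration:

    import multiprocessing

    def work(x):              # module-level: picklable, fine for Pool
        return x * x

    class Runner(object):
        def method(self, x):  # bound method: fails to pickle on Python 2
            return x * x

    if __name__ == "__main__":
        pool = multiprocessing.Pool(processes=1)
        print(pool.apply_async(work, [3]).get())        # -> 9
        # pool.apply_async(Runner().method, [3]).get()  # PicklingError
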
-def run_test_group(cmdline, test_group):
- results = []
- child = None
- start_time = time.time()
- startuplog = None
-
- # run test app
- try:
- # prepare logging of init
- startuplog = StringIO.StringIO()
-
- print >>startuplog, "\n%s %s\n" % ("="*20, test_group["Prefix"])
- print >>startuplog, "\ncmdline=%s" % cmdline
-
- child = pexpect.spawn(cmdline, logfile=startuplog)
-
- # wait for target to boot
- if not wait_prompt(child):
- child.close()
-
- results.append((-1, "Fail [No prompt]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for test in test_group["Tests"]:
- results.append((-1, "Fail [No prompt]", test["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- except:
- results.append((-1, "Fail [Can't run]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for t in test_group["Tests"]:
- results.append((-1, "Fail [Can't run]", t["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- # startup was successful
- results.append((0, "Success", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # parse the binary for available test commands
- binary = cmdline.split()[0]
- stripped = 'not stripped' not in subprocess.check_output(['file', binary])
- if not stripped:
- symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
- avail_cmds = re.findall('test_register_(\w+)', symbols)
-
- # run all tests in test group
- for test in test_group["Tests"]:
-
- # create log buffer for each test
- # in multiprocessing environment, the logging would be
- # interleaved and will create a mess, hence the buffering
- logfile = StringIO.StringIO()
- child.logfile = logfile
-
- result = ()
-
- # make a note when the test started
- start_time = time.time()
-
- try:
- # print test name to log buffer
- print >>logfile, "\n%s %s\n" % ("-"*20, test["Name"])
-
- # run test function associated with the test
- if stripped or test["Command"] in avail_cmds:
- result = test["Func"](child, test["Command"])
- else:
- result = (0, "Skipped [Not Available]")
-
- # make a note when the test was finished
- end_time = time.time()
-
- # append test data to the result tuple
- result += (test["Name"], end_time - start_time,
- logfile.getvalue())
-
- # call report function, if any defined, and supply it with
- # target and complete log for test run
- if test["Report"]:
- report = test["Report"](self.target, log)
-
- # append report to results tuple
- result += (report,)
- else:
- # report is None
- result += (None,)
- except:
- # make a note when the test crashed
- end_time = time.time()
-
- # mark test as failed
- result = (-1, "Fail [Crash]", test["Name"],
- end_time - start_time, logfile.getvalue(), None)
- finally:
- # append the results to the results list
- results.append(result)
-
- # regardless of whether test has crashed, try quitting it
- try:
- child.sendline("quit")
- child.close()
- # if the test crashed, just do nothing instead
- except:
- # nop
- pass
-
- # return test results
- return results
-
+def run_test_group(cmdline, test_group):
+ results = []
+ child = None
+ start_time = time.time()
+ startuplog = None
+
+ # run test app
+ try:
+ # prepare logging of init
+ startuplog = StringIO.StringIO()
+
+ print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
+ print >>startuplog, "\ncmdline=%s" % cmdline
+
+ child = pexpect.spawn(cmdline, logfile=startuplog)
+
+ # wait for target to boot
+ if not wait_prompt(child):
+ child.close()
+
+ results.append((-1,
+ "Fail [No prompt]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for test in test_group["Tests"]:
+ results.append((-1, "Fail [No prompt]", test["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ except:
+ results.append((-1,
+ "Fail [Can't run]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for t in test_group["Tests"]:
+ results.append((-1, "Fail [Can't run]", t["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ # startup was successful
+ results.append((0, "Success", "Start %s" % test_group["Prefix"],
+ time.time() - start_time, startuplog.getvalue(), None))
+
+ # parse the binary for available test commands
+ binary = cmdline.split()[0]
+ stripped = 'not stripped' not in subprocess.check_output(['file', binary])
+ if not stripped:
+ symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
+ avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+ # run all tests in test group
+ for test in test_group["Tests"]:
+
+ # create log buffer for each test
+ # in multiprocessing environment, the logging would be
+ # interleaved and will create a mess, hence the buffering
+ logfile = StringIO.StringIO()
+ child.logfile = logfile
+
+ result = ()
+
+ # make a note when the test started
+ start_time = time.time()
+
+ try:
+ # print test name to log buffer
+ print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+
+ # run test function associated with the test
+ if stripped or test["Command"] in avail_cmds:
+ result = test["Func"](child, test["Command"])
+ else:
+ result = (0, "Skipped [Not Available]")
+
+ # make a note when the test was finished
+ end_time = time.time()
+
+ # append test data to the result tuple
+ result += (test["Name"], end_time - start_time,
+ logfile.getvalue())
+
+ # call report function, if any defined, and supply it with
+ # target and complete log for test run
+ if test["Report"]:
+ report = test["Report"](self.target, log)
+
+ # append report to results tuple
+ result += (report,)
+ else:
+ # report is None
+ result += (None,)
+ except:
+ # make a note when the test crashed
+ end_time = time.time()
+
+ # mark test as failed
+ result = (-1, "Fail [Crash]", test["Name"],
+ end_time - start_time, logfile.getvalue(), None)
+ finally:
+ # append the results to the results list
+ results.append(result)
+
+ # regardless of whether test has crashed, try quitting it
+ try:
+ child.sendline("quit")
+ child.close()
+ # if the test crashed, just do nothing instead
+ except:
+ # nop
+ pass
+
+ # return test results
+ return results
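
The file/nm scan inside run_test_group() can be tried standalone to see
which autotests a binary registers (the binary path below is a placeholder):

    import re
    import subprocess

    binary = "./app/test"   # placeholder path
    file_out = subprocess.check_output(["file", binary]).decode("utf-8")
    if "not stripped" in file_out:
        symbols = subprocess.check_output(["nm", binary]).decode("utf-8")
        # each available test command shows up as a test_register_<name> symbol
        print(re.findall(r"test_register_(\w+)", symbols))
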
# class representing an instance of autotests run
class AutotestRunner:
- cmdline = ""
- parallel_test_groups = []
- non_parallel_test_groups = []
- logfile = None
- csvwriter = None
- target = ""
- start = None
- n_tests = 0
- fails = 0
- log_buffers = []
- blacklist = []
- whitelist = []
-
-
- def __init__(self, cmdline, target, blacklist, whitelist):
- self.cmdline = cmdline
- self.target = target
- self.blacklist = blacklist
- self.whitelist = whitelist
-
- # log file filename
- logfile = "%s.log" % target
- csvfile = "%s.csv" % target
-
- self.logfile = open(logfile, "w")
- csvfile = open(csvfile, "w")
- self.csvwriter = csv.writer(csvfile)
-
- # prepare results table
- self.csvwriter.writerow(["test_name","test_result","result_str"])
-
-
-
- # set up cmdline string
- def __get_cmdline(self, test):
- cmdline = self.cmdline
-
- # append memory limitations for each test
- # otherwise tests won't run in parallel
- if not "i686" in self.target:
- cmdline += " --socket-mem=%s"% test["Memory"]
- else:
- # affinitize startup so that tests don't fail on i686
- cmdline = "taskset 1 " + cmdline
- cmdline += " -m " + str(sum(map(int,test["Memory"].split(","))))
-
- # set group prefix for autotest group
- # otherwise they won't run in parallel
- cmdline += " --file-prefix=%s"% test["Prefix"]
-
- return cmdline
-
-
-
- def add_parallel_test_group(self,test_group):
- self.parallel_test_groups.append(test_group)
-
- def add_non_parallel_test_group(self,test_group):
- self.non_parallel_test_groups.append(test_group)
-
-
- def __process_results(self, results):
- # this iterates over individual test results
- for i, result in enumerate(results):
-
- # increase total number of tests that were run
- # do not include "start" test
- if i > 0:
- self.n_tests += 1
-
- # unpack result tuple
- test_result, result_str, test_name, \
- test_time, log, report = result
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
-
- # don't print out total time every line, it's the same anyway
- if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
- else:
- print ""
-
- # if test failed and it wasn't a "start" test
- if test_result < 0 and not i == 0:
- self.fails += 1
-
- # collect logs
- self.log_buffers.append(log)
-
- # create report if it exists
- if report:
- try:
- f = open("%s_%s_report.rst" % (self.target,test_name), "w")
- except IOError:
- print "Report for %s could not be created!" % test_name
- else:
- with f:
- f.write(report)
-
- # write test result to CSV file
- if i != 0:
- self.csvwriter.writerow([test_name, test_result, result_str])
-
-
-
-
- # this function iterates over test groups and removes each
- # test that is not in whitelist/blacklist
- def __filter_groups(self, test_groups):
- groups_to_remove = []
-
- # filter out tests from parallel test groups
- for i, test_group in enumerate(test_groups):
-
- # iterate over a copy so that we could safely delete individual tests
- for test in test_group["Tests"][:]:
- test_id = test["Command"]
-
- # dump tests are specified in full e.g. "Dump_mempool"
- if "_autotest" in test_id:
- test_id = test_id[:-len("_autotest")]
-
- # filter out blacklisted/whitelisted tests
- if self.blacklist and test_id in self.blacklist:
- test_group["Tests"].remove(test)
- continue
- if self.whitelist and test_id not in self.whitelist:
- test_group["Tests"].remove(test)
- continue
-
- # modify or remove original group
- if len(test_group["Tests"]) > 0:
- test_groups[i] = test_group
- else:
- # remember which groups should be deleted
- # put the numbers backwards so that we start
- # deleting from the end, not from the beginning
- groups_to_remove.insert(0, i)
-
- # remove test groups that need to be removed
- for i in groups_to_remove:
- del test_groups[i]
-
- return test_groups
-
-
-
- # iterate over test groups and run tests associated with them
- def run_all_tests(self):
- # filter groups
- self.parallel_test_groups = \
- self.__filter_groups(self.parallel_test_groups)
- self.non_parallel_test_groups = \
- self.__filter_groups(self.non_parallel_test_groups)
-
- # create a pool of worker threads
- pool = multiprocessing.Pool(processes=1)
-
- results = []
-
- # whatever happens, try to save as much logs as possible
- try:
-
- # create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
-
- # make a note of tests start time
- self.start = time.time()
-
- # assign worker threads to run test groups
- for test_group in self.parallel_test_groups:
- result = pool.apply_async(run_test_group,
- [self.__get_cmdline(test_group), test_group])
- results.append(result)
-
- # iterate while we have group execution results to get
- while len(results) > 0:
-
- # iterate over a copy to be able to safely delete results
- # this iterates over a list of group results
- for group_result in results[:]:
-
- # if the thread hasn't finished yet, continue
- if not group_result.ready():
- continue
-
- res = group_result.get()
-
- self.__process_results(res)
-
- # remove result from results list once we're done with it
- results.remove(group_result)
-
- # run non_parallel tests. they are run one by one, synchronously
- for test_group in self.non_parallel_test_groups:
- group_result = run_test_group(self.__get_cmdline(test_group), test_group)
-
- self.__process_results(group_result)
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60, total_time % 60)
- if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
-
- # write summary to logfile
- self.logfile.write("Summary\n")
- self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
- self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
- self.logfile.write("Failed tests: ".ljust(15) + "%i\n" % self.fails)
- except:
- print "Exception occured"
- print sys.exc_info()
- self.fails = 1
-
- # drop logs from all executions to a logfile
- for buf in self.log_buffers:
- self.logfile.write(buf.replace("\r",""))
-
- log_buffers = []
-
- return self.fails
+ cmdline = ""
+ parallel_test_groups = []
+ non_parallel_test_groups = []
+ logfile = None
+ csvwriter = None
+ target = ""
+ start = None
+ n_tests = 0
+ fails = 0
+ log_buffers = []
+ blacklist = []
+ whitelist = []
+
+ def __init__(self, cmdline, target, blacklist, whitelist):
+ self.cmdline = cmdline
+ self.target = target
+ self.blacklist = blacklist
+ self.whitelist = whitelist
+
+ # log file filename
+ logfile = "%s.log" % target
+ csvfile = "%s.csv" % target
+
+ self.logfile = open(logfile, "w")
+ csvfile = open(csvfile, "w")
+ self.csvwriter = csv.writer(csvfile)
+
+ # prepare results table
+ self.csvwriter.writerow(["test_name", "test_result", "result_str"])
+
+ # set up cmdline string
+ def __get_cmdline(self, test):
+ cmdline = self.cmdline
+
+ # append memory limitations for each test
+ # otherwise tests won't run in parallel
+ if "i686" not in self.target:
+ cmdline += " --socket-mem=%s" % test["Memory"]
+ else:
+ # affinitize startup so that tests don't fail on i686
+ cmdline = "taskset 1 " + cmdline
+ cmdline += " -m " + str(sum(map(int, test["Memory"].split(","))))
+
+ # set group prefix for autotest group
+ # otherwise they won't run in parallel
+ cmdline += " --file-prefix=%s" % test["Prefix"]
+
+ return cmdline
+
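
Concretely, __get_cmdline() turns the base command line into something like
the following (memory and prefix values illustrative):

    cmdline = "./app/test -c f -n 4"
    cmdline += " --socket-mem=%s" % "8,8"       # per-socket memory cap
    cmdline += " --file-prefix=%s" % "group_1"  # distinct hugepage file prefix
    print(cmdline)
    # -> ./app/test -c f -n 4 --socket-mem=8,8 --file-prefix=group_1
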
+ def add_parallel_test_group(self, test_group):
+ self.parallel_test_groups.append(test_group)
+
+ def add_non_parallel_test_group(self, test_group):
+ self.non_parallel_test_groups.append(test_group)
+
+ def __process_results(self, results):
+ # this iterates over individual test results
+ for i, result in enumerate(results):
+
+ # increase total number of tests that were run
+ # do not include "start" test
+ if i > 0:
+ self.n_tests += 1
+
+ # unpack result tuple
+ test_result, result_str, test_name, \
+ test_time, log, report = result
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print results, test run time and total time since start
+ print ("%s:" % test_name).ljust(30),
+ print result_str.ljust(29),
+ print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+
+ # don't print out total time every line, it's the same anyway
+ if i == len(results) - 1:
+ print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ else:
+ print ""
+
+ # if test failed and it wasn't a "start" test
+ if test_result < 0 and not i == 0:
+ self.fails += 1
+
+ # collect logs
+ self.log_buffers.append(log)
+
+ # create report if it exists
+ if report:
+ try:
+ f = open("%s_%s_report.rst" %
+ (self.target, test_name), "w")
+ except IOError:
+ print "Report for %s could not be created!" % test_name
+ else:
+ with f:
+ f.write(report)
+
+ # write test result to CSV file
+ if i != 0:
+ self.csvwriter.writerow([test_name, test_result, result_str])
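
For reference, the shape of one result tuple consumed by __process_results(),
inferred from the unpacking above (field values illustrative):

    result = (0,                  # test_result: 0 pass, -1 fail
              "Success",          # result_str
              "Ring autotest",    # test_name
              12.3,               # test_time in seconds
              "captured log",     # log buffer contents
              None)               # report, or an .rst report string
    test_result, result_str, test_name, test_time, log, report = result
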
+
+ # this function iterates over test groups and removes each
+ # test that is not in whitelist/blacklist
+ def __filter_groups(self, test_groups):
+ groups_to_remove = []
+
+ # filter out tests from parallel test groups
+ for i, test_group in enumerate(test_groups):
+
+ # iterate over a copy so that we can safely delete individual
+ # tests
+ for test in test_group["Tests"][:]:
+ test_id = test["Command"]
+
+ # dump tests are specified in full e.g. "Dump_mempool"
+ if "_autotest" in test_id:
+ test_id = test_id[:-len("_autotest")]
+
+ # filter out blacklisted/whitelisted tests
+ if self.blacklist and test_id in self.blacklist:
+ test_group["Tests"].remove(test)
+ continue
+ if self.whitelist and test_id not in self.whitelist:
+ test_group["Tests"].remove(test)
+ continue
+
+ # modify or remove original group
+ if len(test_group["Tests"]) > 0:
+ test_groups[i] = test_group
+ else:
+ # remember which groups should be deleted
+ # put the numbers backwards so that we start
+ # deleting from the end, not from the beginning
+ groups_to_remove.insert(0, i)
+
+ # remove test groups that need to be removed
+ for i in groups_to_remove:
+ del test_groups[i]
+
+ return test_groups
+
+ # iterate over test groups and run tests associated with them
+ def run_all_tests(self):
+ # filter groups
+ self.parallel_test_groups = \
+ self.__filter_groups(self.parallel_test_groups)
+ self.non_parallel_test_groups = \
+ self.__filter_groups(self.non_parallel_test_groups)
+
+ # create a pool of worker threads
+ pool = multiprocessing.Pool(processes=1)
+
+ results = []
+
+ # whatever happens, try to save as much logs as possible
+ try:
+
+ # create table header
+ print ""
+ print "Test name".ljust(30),
+ print "Test result".ljust(29),
+ print "Test".center(9),
+ print "Total".center(9)
+ print "=" * 80
+
+ # make a note of tests start time
+ self.start = time.time()
+
+ # assign worker threads to run test groups
+ for test_group in self.parallel_test_groups:
+ result = pool.apply_async(run_test_group,
+ [self.__get_cmdline(test_group),
+ test_group])
+ results.append(result)
+
+ # iterate while we have group execution results to get
+ while len(results) > 0:
+
+ # iterate over a copy to be able to safely delete results
+ # this iterates over a list of group results
+ for group_result in results[:]:
+
+ # if the thread hasn't finished yet, continue
+ if not group_result.ready():
+ continue
+
+ res = group_result.get()
+
+ self.__process_results(res)
+
+ # remove result from results list once we're done with it
+ results.remove(group_result)
+
+ # run non_parallel tests. they are run one by one, synchronously
+ for test_group in self.non_parallel_test_groups:
+ group_result = run_test_group(
+ self.__get_cmdline(test_group), test_group)
+
+ self.__process_results(group_result)
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print out summary
+ print "=" * 80
+ print "Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60)
+ if self.fails != 0:
+ print "Number of failed tests: %s" % str(self.fails)
+
+ # write summary to logfile
+ self.logfile.write("Summary\n")
+ self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
+ self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
+ self.logfile.write("Failed tests: ".ljust(
+ 15) + "%i\n" % self.fails)
+ except:
+ print "Exception occurred"
+ print sys.exc_info()
+ self.fails = 1
+
+ # drop logs from all executions to a logfile
+ for buf in self.log_buffers:
+ self.logfile.write(buf.replace("\r", ""))
+
+ return self.fails
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index 14cffd0..c482ea8 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -33,257 +33,272 @@
# Test functions
-import sys, pexpect, time, os, re
+import pexpect
# default autotest, used to run most tests
# waits for "Test OK"
+
+
def default_autotest(child, test_name):
- child.sendline(test_name)
- result = child.expect(["Test OK", "Test Failed",
- "Command not found", pexpect.TIMEOUT], timeout = 900)
- if result == 1:
- return -1, "Fail"
- elif result == 2:
- return -1, "Fail [Not found]"
- elif result == 3:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ result = child.expect(["Test OK", "Test Failed",
+ "Command not found", pexpect.TIMEOUT], timeout=900)
+ if result == 1:
+ return -1, "Fail"
+ elif result == 2:
+ return -1, "Fail [Not found]"
+ elif result == 3:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
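
The pattern above relies on pexpect.expect() returning the index of
whichever pattern in the list matched first; a self-contained demonstration:

    import pexpect

    child = pexpect.spawn("echo Test OK")
    patterns = ["Test OK", "Test Failed", pexpect.TIMEOUT]
    index = child.expect(patterns, timeout=5)
    print(index)   # -> 0, i.e. the first pattern, mapped to "Success" above
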
# autotest used to run dump commands
# just fires the command
+
+
def dump_autotest(child, test_name):
- child.sendline(test_name)
- return 0, "Success"
+ child.sendline(test_name)
+ return 0, "Success"
# memory autotest
# reads output and waits for Test OK
+
+
def memory_autotest(child, test_name):
- child.sendline(test_name)
- regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, socket_id:[0-9]*"
- index = child.expect([regexp, pexpect.TIMEOUT], timeout = 180)
- if index != 0:
- return -1, "Fail [Timeout]"
- size = int(child.match.groups()[0], 16)
- if size <= 0:
- return -1, "Fail [Bad size]"
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, " \
+ "socket_id:[0-9]*"
+ index = child.expect([regexp, pexpect.TIMEOUT], timeout=180)
+ if index != 0:
+ return -1, "Fail [Timeout]"
+ size = int(child.match.groups()[0], 16)
+ if size <= 0:
+ return -1, "Fail [Bad size]"
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
+
def spinlock_autotest(child, test_name):
- i = 0
- ir = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 5)
- # ok
- if index == 0:
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
- elif index == 3:
- if int(child.match.groups()[0]) < ir:
- return -1, "Fail [Bad order]"
- ir = int(child.match.groups()[0])
-
- # fail
- elif index == 4:
- return -1, "Fail [Timeout]"
- elif index == 1:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ ir = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Hello from within recursive locks "
+ "from ([0-9]*) !",
+ pexpect.TIMEOUT], timeout=5)
+ # ok
+ if index == 0:
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+ elif index == 3:
+ if int(child.match.groups()[0]) < ir:
+ return -1, "Fail [Bad order]"
+ ir = int(child.match.groups()[0])
+
+ # fail
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+ elif index == 1:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def rwlock_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Global write lock taken on master core ([0-9]*)",
- pexpect.TIMEOUT], timeout = 10)
- # ok
- if index == 0:
- if i != 0xffff:
- return -1, "Fail [Message is missing]"
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
-
- # must be the last message, check ordering
- elif index == 3:
- i = 0xffff
-
- elif index == 4:
- return -1, "Fail [Timeout]"
-
- # fail
- else:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Global write lock taken on master "
+ "core ([0-9]*)",
+ pexpect.TIMEOUT], timeout=10)
+ # ok
+ if index == 0:
+ if i != 0xffff:
+ return -1, "Fail [Message is missing]"
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+
+ # must be the last message, check ordering
+ elif index == 3:
+ i = 0xffff
+
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+
+ # fail
+ else:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def logs_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- log_list = [
- "TESTAPP1: error message",
- "TESTAPP1: critical message",
- "TESTAPP2: critical message",
- "TESTAPP1: error message",
- ]
-
- for log_msg in log_list:
- index = child.expect([log_msg,
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 3:
- return -1, "Fail [Timeout]"
- # not ok
- elif index != 0:
- return -1, "Fail"
-
- index = child.expect(["Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ log_list = [
+ "TESTAPP1: error message",
+ "TESTAPP1: critical message",
+ "TESTAPP2: critical message",
+ "TESTAPP1: error message",
+ ]
+
+ for log_msg in log_list:
+ index = child.expect([log_msg,
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 3:
+ return -1, "Fail [Timeout]"
+ # not ok
+ elif index != 0:
+ return -1, "Fail"
+
+ index = child.expect(["Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ return 0, "Success"
+
def timer_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- index = child.expect(["Start timer stress tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer stress tests 2",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer basic tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- prev_lcore_timer1 = -1
-
- lcore_tim0 = -1
- lcore_tim1 = -1
- lcore_tim2 = -1
- lcore_tim3 = -1
-
- while True:
- index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) count=([0-9]*) on core ([0-9]*)",
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 1:
- break
-
- if index == 2:
- return -1, "Fail"
- elif index == 3:
- return -1, "Fail [Timeout]"
-
- try:
- t = int(child.match.groups()[0])
- id = int(child.match.groups()[1])
- cnt = int(child.match.groups()[2])
- lcore = int(child.match.groups()[3])
- except:
- return -1, "Fail [Cannot parse]"
-
- # timer0 always expires on the same core when cnt < 20
- if id == 0:
- if lcore_tim0 == -1:
- lcore_tim0 = lcore
- elif lcore != lcore_tim0 and cnt < 20:
- return -1, "Fail [lcore != lcore_tim0 (%d, %d)]"%(lcore, lcore_tim0)
- if cnt > 21:
- return -1, "Fail [tim0 cnt > 21]"
-
- # timer1 each time expires on a different core
- if id == 1:
- if lcore == lcore_tim1:
- return -1, "Fail [lcore == lcore_tim1 (%d, %d)]"%(lcore, lcore_tim1)
- lcore_tim1 = lcore
- if cnt > 10:
- return -1, "Fail [tim1 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 2:
- if lcore_tim2 == -1:
- lcore_tim2 = lcore
- elif lcore != lcore_tim2:
- return -1, "Fail [lcore != lcore_tim2 (%d, %d)]"%(lcore, lcore_tim2)
- if cnt > 30:
- return -1, "Fail [tim2 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 3:
- if lcore_tim3 == -1:
- lcore_tim3 = lcore
- elif lcore != lcore_tim3:
- return -1, "Fail [lcore_tim3 changed (%d -> %d)]"%(lcore, lcore_tim3)
- if cnt > 30:
- return -1, "Fail [tim3 cnt > 30]"
-
- # must be 2 different cores
- if lcore_tim0 == lcore_tim3:
- return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]"%(lcore_tim0, lcore_tim3)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ index = child.expect(["Start timer stress tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer stress tests 2",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer basic tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ lcore_tim0 = -1
+ lcore_tim1 = -1
+ lcore_tim2 = -1
+ lcore_tim3 = -1
+
+ while True:
+ index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) "
+ "count=([0-9]*) on core ([0-9]*)",
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 1:
+ break
+
+ if index == 2:
+ return -1, "Fail"
+ elif index == 3:
+ return -1, "Fail [Timeout]"
+
+ try:
+ id = int(child.match.groups()[1])
+ cnt = int(child.match.groups()[2])
+ lcore = int(child.match.groups()[3])
+ except:
+ return -1, "Fail [Cannot parse]"
+
+ # timer0 always expires on the same core when cnt < 20
+ if id == 0:
+ if lcore_tim0 == -1:
+ lcore_tim0 = lcore
+ elif lcore != lcore_tim0 and cnt < 20:
+ return -1, "Fail [lcore != lcore_tim0 (%d, %d)]" \
+ % (lcore, lcore_tim0)
+ if cnt > 21:
+ return -1, "Fail [tim0 cnt > 21]"
+
+ # timer1 each time expires on a different core
+ if id == 1:
+ if lcore == lcore_tim1:
+ return -1, "Fail [lcore == lcore_tim1 (%d, %d)]" \
+ % (lcore, lcore_tim1)
+ lcore_tim1 = lcore
+ if cnt > 10:
+ return -1, "Fail [tim1 cnt > 10]"
+
+ # timer2 always expires on the same core
+ if id == 2:
+ if lcore_tim2 == -1:
+ lcore_tim2 = lcore
+ elif lcore != lcore_tim2:
+ return -1, "Fail [lcore != lcore_tim2 (%d, %d)]" \
+ % (lcore, lcore_tim2)
+ if cnt > 30:
+ return -1, "Fail [tim2 cnt > 30]"
+
+ # timer3 always expires on the same core
+ if id == 3:
+ if lcore_tim3 == -1:
+ lcore_tim3 = lcore
+ elif lcore != lcore_tim3:
+ return -1, "Fail [lcore_tim3 changed (%d -> %d)]" \
+ % (lcore, lcore_tim3)
+ if cnt > 30:
+ return -1, "Fail [tim3 cnt > 30]"
+
+ # must be 2 different cores
+ if lcore_tim0 == lcore_tim3:
+ return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]" \
+ % (lcore_tim0, lcore_tim3)
+
+ return 0, "Success"
+
def ring_autotest(child, test_name):
- child.sendline(test_name)
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 2)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- child.sendline("set_watermark test 100")
- child.sendline("dump_ring test")
- index = child.expect([" watermark=100",
- pexpect.TIMEOUT], timeout = 1)
- if index != 0:
- return -1, "Fail [Bad watermark]"
-
- return 0, "Success"
+ child.sendline(test_name)
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=2)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ child.sendline("set_watermark test 100")
+ child.sendline("dump_ring test")
+ index = child.expect([" watermark=100",
+ pexpect.TIMEOUT], timeout=1)
+ if index != 0:
+ return -1, "Fail [Bad watermark]"
+
+ return 0, "Success"
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 29e8efb..34c62de 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -58,7 +58,8 @@
html_show_copyright = False
highlight_language = 'none'
-version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion']).decode('utf-8').rstrip()
+version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion'])
+version = version.decode('utf-8').rstrip()
release = version
master_doc = 'index'
@@ -94,6 +95,7 @@
'preamble': latex_preamble
}
+
# Override the default Latex formatter in order to modify the
# code/verbatim blocks.
class CustomLatexFormatter(LatexFormatter):
@@ -117,12 +119,12 @@ def __init__(self, **options):
("tools/devbind", "dpdk-devbind",
"check device status and bind/unbind them from drivers", "", 8)]
-######## :numref: fallback ########
+
+# ####### :numref: fallback ########
# The following hook functions add some simple handling for the :numref:
# directive for Sphinx versions prior to 1.3.1. The functions replace the
# :numref: reference with a link to the target (for all Sphinx doc types).
# It doesn't try to label figures/tables.
-
def numref_role(reftype, rawtext, text, lineno, inliner):
"""
Add a Sphinx role to handle numref references. Note, we can't convert
@@ -136,6 +138,7 @@ def numref_role(reftype, rawtext, text, lineno, inliner):
internal=True)
return [newnode], []
+
def process_numref(app, doctree, from_docname):
"""
Process the numref nodes once the doctree has been built and prior to
diff --git a/examples/ip_pipeline/config/diagram-generator.py b/examples/ip_pipeline/config/diagram-generator.py
index 6b7170b..1748833 100755
--- a/examples/ip_pipeline/config/diagram-generator.py
+++ b/examples/ip_pipeline/config/diagram-generator.py
@@ -36,7 +36,8 @@
# the DPDK ip_pipeline application.
#
# The input configuration file is translated to an output file in DOT syntax,
-# which is then used to create the image file using graphviz (www.graphviz.org).
+# which is then used to create the image file using graphviz
+# (www.graphviz.org).
#
from __future__ import print_function
@@ -94,6 +95,7 @@
# SOURCEx | SOURCEx | SOURCEx | PIPELINEy | SOURCEx
# SINKx | SINKx | PIPELINEy | SINKx | SINKx
+
#
# Parse the input configuration file to detect the graph nodes and edges
#
@@ -321,16 +323,17 @@ def process_config_file(cfgfile):
#
print('Creating image file "%s" ...' % imgfile)
if os.system('which dot > /dev/null'):
- print('Error: Unable to locate "dot" executable.' \
- 'Please install the "graphviz" package (www.graphviz.org).')
+ print('Error: Unable to locate "dot" executable. '
+ 'Please install the "graphviz" package (www.graphviz.org).')
return
os.system(dot_cmd)
if __name__ == '__main__':
- parser = argparse.ArgumentParser(description=\
- 'Create diagram for IP pipeline configuration file.')
+ parser = argparse.ArgumentParser(description='Create diagram for IP '
+ 'pipeline configuration '
+ 'file.')
parser.add_argument(
'-f',
diff --git a/examples/ip_pipeline/config/pipeline-to-core-mapping.py b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
index c2050b8..7a4eaa2 100755
--- a/examples/ip_pipeline/config/pipeline-to-core-mapping.py
+++ b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
@@ -39,15 +39,14 @@
#
from __future__ import print_function
-import sys
-import errno
-import os
-import re
+from collections import namedtuple
+import argparse
import array
+import errno
import itertools
+import os
import re
-import argparse
-from collections import namedtuple
+import sys
# default values
enable_stage0_traceout = 1
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index d38d0b5..ccc22ec 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -38,40 +38,40 @@
cores = []
core_map = {}
-fd=open("/proc/cpuinfo")
+fd = open("/proc/cpuinfo")
lines = fd.readlines()
fd.close()
core_details = []
core_lines = {}
for line in lines:
- if len(line.strip()) != 0:
- name, value = line.split(":", 1)
- core_lines[name.strip()] = value.strip()
- else:
- core_details.append(core_lines)
- core_lines = {}
+ if len(line.strip()) != 0:
+ name, value = line.split(":", 1)
+ core_lines[name.strip()] = value.strip()
+ else:
+ core_details.append(core_lines)
+ core_lines = {}
for core in core_details:
- for field in ["processor", "core id", "physical id"]:
- if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
- sys.exit(1)
- core[field] = int(core[field])
+ for field in ["processor", "core id", "physical id"]:
+ if field not in core:
+ print "Error getting '%s' value from /proc/cpuinfo" % field
+ sys.exit(1)
+ core[field] = int(core[field])
- if core["core id"] not in cores:
- cores.append(core["core id"])
- if core["physical id"] not in sockets:
- sockets.append(core["physical id"])
- key = (core["physical id"], core["core id"])
- if key not in core_map:
- core_map[key] = []
- core_map[key].append(core["processor"])
+ if core["core id"] not in cores:
+ cores.append(core["core id"])
+ if core["physical id"] not in sockets:
+ sockets.append(core["physical id"])
+ key = (core["physical id"], core["core id"])
+ if key not in core_map:
+ core_map[key] = []
+ core_map[key].append(core["processor"])
print "============================================================"
print "Core and Socket Information (as reported by '/proc/cpuinfo')"
print "============================================================\n"
-print "cores = ",cores
+print "cores = ", cores
print "sockets = ", sockets
print ""
@@ -81,15 +81,16 @@
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
+ print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
print ""
+
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "--------".ljust(max_core_map_len),
+ print "--------".ljust(max_core_map_len),
print ""
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
- for s in sockets:
- print str(core_map[(s,c)]).ljust(max_core_map_len),
- print ""
+ print "Core %s" % str(c).ljust(max_core_id_len),
+ for s in sockets:
+ print str(core_map[(s, c)]).ljust(max_core_map_len),
+ print ""
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index f1d374d..4f51a4b 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -93,10 +93,10 @@ def usage():
Unbind a device (Equivalent to \"-b none\")
--force:
- By default, network devices which are used by Linux - as indicated by having
- routes in the routing table - cannot be modified. Using the --force
- flag overrides this behavior, allowing active links to be forcibly
- unbound.
+ By default, network devices which are used by Linux - as indicated by
+ having routes in the routing table - cannot be modified. Using the
+ --force flag overrides this behavior, allowing active links to be
+ forcibly unbound.
WARNING: This can lead to loss of network connection and should be used
with caution.
@@ -151,7 +151,7 @@ def find_module(mod):
# check for a copy based off current path
tools_dir = dirname(abspath(sys.argv[0]))
- if (tools_dir.endswith("tools")):
+ if tools_dir.endswith("tools"):
base_dir = dirname(tools_dir)
find_out = check_output(["find", base_dir, "-name", mod + ".ko"])
if len(find_out) > 0: # something matched
@@ -249,7 +249,7 @@ def get_nic_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
+ if len(dev_line) == 0:
if dev["Class"][0:2] == NETWORK_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
@@ -315,8 +315,8 @@ def get_crypto_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
- if (dev["Class"][0:2] == CRYPTO_BASE_CLASS):
+ if len(dev_line) == 0:
+ if dev["Class"][0:2] == CRYPTO_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
dev["Device"] = int(dev["Device"], 16)
@@ -513,7 +513,8 @@ def display_devices(title, dev_list, extra_params=None):
for dev in dev_list:
if extra_params is not None:
strings.append("%s '%s' %s" % (dev["Slot"],
- dev["Device_str"], extra_params % dev))
+ dev["Device_str"],
+ extra_params % dev))
else:
strings.append("%s '%s'" % (dev["Slot"], dev["Device_str"]))
# sort before printing, so that the entries appear in PCI order
@@ -532,7 +533,7 @@ def show_status():
# split our list of network devices into the three categories above
for d in devices.keys():
- if (NETWORK_BASE_CLASS in devices[d]["Class"]):
+ if NETWORK_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
@@ -555,7 +556,7 @@ def show_status():
no_drv = []
for d in devices.keys():
- if (CRYPTO_BASE_CLASS in devices[d]["Class"]):
+ if CRYPTO_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3db9819..3d3ad7d 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -4,52 +4,20 @@
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+import json
import os
+import platform
+import string
import sys
+from elftools.common.exceptions import ELFError
+from elftools.common.py3compat import (byte2int, bytes2str, str2bytes)
+from elftools.elf.elffile import ELFFile
from optparse import OptionParser
-import string
-import json
-import platform
# For running from development directory. It should take precedence over the
# installed pyelftools.
sys.path.insert(0, '.')
-
-from elftools import __version__
-from elftools.common.exceptions import ELFError
-from elftools.common.py3compat import (
- ifilter, byte2int, bytes2str, itervalues, str2bytes)
-from elftools.elf.elffile import ELFFile
-from elftools.elf.dynamic import DynamicSection, DynamicSegment
-from elftools.elf.enums import ENUM_D_TAG
-from elftools.elf.segments import InterpSegment
-from elftools.elf.sections import SymbolTableSection
-from elftools.elf.gnuversions import (
- GNUVerSymSection, GNUVerDefSection,
- GNUVerNeedSection,
-)
-from elftools.elf.relocation import RelocationSection
-from elftools.elf.descriptions import (
- describe_ei_class, describe_ei_data, describe_ei_version,
- describe_ei_osabi, describe_e_type, describe_e_machine,
- describe_e_version_numeric, describe_p_type, describe_p_flags,
- describe_sh_type, describe_sh_flags,
- describe_symbol_type, describe_symbol_bind, describe_symbol_visibility,
- describe_symbol_shndx, describe_reloc_type, describe_dyn_tag,
- describe_ver_flags,
-)
-from elftools.elf.constants import E_FLAGS
-from elftools.dwarf.dwarfinfo import DWARFInfo
-from elftools.dwarf.descriptions import (
- describe_reg_name, describe_attr_value, set_global_machine_arch,
- describe_CFI_instructions, describe_CFI_register_rule,
- describe_CFI_CFA_rule,
-)
-from elftools.dwarf.constants import (
- DW_LNS_copy, DW_LNS_set_file, DW_LNE_define_file)
-from elftools.dwarf.callframe import CIE, FDE
-
raw_output = False
pcidb = None
@@ -326,7 +294,7 @@ def parse_pmd_info_string(self, mystring):
for i in optional_pmd_info:
try:
print("%s: %s" % (i['tag'], pmdinfo[i['id']]))
- except KeyError as e:
+ except KeyError:
continue
if (len(pmdinfo["pci_ids"]) != 0):
@@ -475,7 +443,7 @@ def process_dt_needed_entries(self):
with open(library, 'rb') as file:
try:
libelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
print("%s is no an ELF file" % library)
continue
libelf.process_dt_needed_entries()
@@ -491,7 +459,7 @@ def scan_autoload_path(autoload_path):
try:
dirs = os.listdir(autoload_path)
- except OSError as e:
+ except OSError:
# Couldn't read the directory, give up
return
@@ -503,10 +471,10 @@ def scan_autoload_path(autoload_path):
try:
file = open(dpath, 'rb')
readelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
# this is likely not an elf file, skip it
continue
- except IOError as e:
+ except IOError:
# No permission to read the file, skip it
continue
@@ -531,7 +499,7 @@ def scan_for_autoload_pmds(dpdk_path):
file = open(dpdk_path, 'rb')
try:
readelf = ReadElf(file, sys.stdout)
- except ElfError as e:
+ except ELFError:
if raw_output is False:
print("Unable to parse %s" % file)
return
@@ -557,7 +525,7 @@ def main(stream=None):
global raw_output
global pcidb
- pcifile_default = "./pci.ids" # for unknown OS's assume local file
+ pcifile_default = "./pci.ids" # For unknown OS's assume local file
if platform.system() == 'Linux':
pcifile_default = "/usr/share/hwdata/pci.ids"
elif platform.system() == 'FreeBSD':
@@ -577,7 +545,8 @@ def main(stream=None):
"to get vendor names from",
default=pcifile_default, metavar="FILE")
optparser.add_option("-t", "--table", dest="tblout",
- help="output information on hw support as a hex table",
+ help="output information on hw support as a "
+ "hex table",
action='store_true')
optparser.add_option("-p", "--plugindir", dest="pdir",
help="scan dpdk for autoload plugins",
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
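The timer and ring autotests above all follow the same pexpect pattern:
send a command to the test binary, then branch on the index of whichever
expected output matches first. A minimal self-contained sketch of that
pattern, using an interactive Python process as a stand-in for the DPDK
test binary:

    import pexpect

    # Spawn a stand-in interactive program; the autotests spawn the
    # DPDK test binary instead and drive it the same way.
    child = pexpect.spawn("python")

    child.sendline("print(1 + 1)")
    # expect() returns the index of the first pattern that matches,
    # which is how the autotests separate pass, fail and timeout.
    index = child.expect(["2", "Traceback", pexpect.TIMEOUT], timeout=5)
    if index == 0:
        print("Test OK")
    elif index == 1:
        print("Test Failed")
    else:
        print("Test Failed [Timeout]")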
* [dpdk-dev] [PATCH v2 2/3] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (10 preceding siblings ...)
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 1/3] app: make python apps pep8 compliant John McNamara
@ 2016-12-18 14:25 ` John McNamara
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 3/3] doc: add required python versions to docs John McNamara
` (8 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:25 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
Make all the DPDK Python apps work with either Python 2 or 3 so
that they run with whatever the system default is.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
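Most of the conversion relies on the __future__ import, which turns
print into a function on Python 2.7 as well, so the same source runs
unmodified on both major versions. A minimal standalone sketch of the
idiom (illustrative only, not part of the patch):

    from __future__ import print_function
    import sys

    # With the __future__ import the same print() call runs unchanged
    # on Python 2.7 and Python 3.
    print("Running on Python %d.%d" % sys.version_info[:2])

    # Multiple arguments print space-separated on both major versions,
    # which is what the converted test scripts rely on.
    print("History fill test:".ljust(30), "PASS")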
app/cmdline_test/cmdline_test.py | 26 ++++++++++++------------
app/cmdline_test/cmdline_test_data.py | 2 --
app/test/autotest.py | 10 ++++-----
app/test/autotest_data.py | 2 --
app/test/autotest_runner.py | 37 ++++++++++++++++------------------
app/test/autotest_test_funcs.py | 2 --
tools/cpu_layout.py | 38 ++++++++++++++++++-----------------
tools/dpdk-devbind.py | 2 +-
tools/dpdk-pmdinfo.py | 14 +++++++------
9 files changed, 64 insertions(+), 69 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 4729987..229f71f 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,7 +32,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that runs cmdline_test app and feeds keystrokes into it.
-
+from __future__ import print_function
import cmdline_test_data
import os
import pexpect
@@ -81,38 +81,38 @@ def runHistoryTest(child):
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
child = pexpect.spawn(test_app_path)
-print "Running command-line tests..."
+print("Running command-line tests...")
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
+ testname = (test["Name"] + ":").ljust(30)
try:
runTest(child, test)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
-print ("History fill test:").ljust(30),
+testname = ("History fill test:").ljust(30)
try:
runHistoryTest(child)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index 3ce6cbc..28dfefe 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/app/test/autotest.py b/app/test/autotest.py
index 3a00538..5c19a02 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,15 +32,15 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that uses either test app or qemu controlled by python-pexpect
-
+from __future__ import print_function
import autotest_data
import autotest_runner
import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print("Usage: autotest.py [test app|test iso image] ",
+ "[target] [whitelist|-blacklist]")
if len(sys.argv) < 3:
usage()
@@ -63,7 +63,7 @@ def usage():
cmdline = "%s -c f -n 4" % (sys.argv[1])
-print cmdline
+print(cmdline)
runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
test_whitelist)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 0cf4cfd..0cd598b 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 55b63a8..fc882ec 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
@@ -271,15 +269,16 @@ def __process_results(self, results):
total_time = int(cur_time - self.start)
# print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+ result = ("%s:" % test_name).ljust(30)
+ result += result_str.ljust(29)
+ result += "[%02dm %02ds]" % (test_time / 60, test_time % 60)
# don't print out total time every line, it's the same anyway
if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ print(result,
+ "[%02dm %02ds]" % (total_time / 60, total_time % 60))
else:
- print ""
+ print(result)
# if test failed and it wasn't a "start" test
if test_result < 0 and not i == 0:
@@ -294,7 +293,7 @@ def __process_results(self, results):
f = open("%s_%s_report.rst" %
(self.target, test_name), "w")
except IOError:
- print "Report for %s could not be created!" % test_name
+ print("Report for %s could not be created!" % test_name)
else:
with f:
f.write(report)
@@ -360,12 +359,10 @@ def run_all_tests(self):
try:
# create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
+ print("")
+ print("Test name".ljust(30), "Test result".ljust(29),
+ "Test".center(9), "Total".center(9))
+ print("=" * 80)
# make a note of tests start time
self.start = time.time()
@@ -407,11 +404,11 @@ def run_all_tests(self):
total_time = int(cur_time - self.start)
# print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60,
- total_time % 60)
+ print("=" * 80)
+ print("Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60))
if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
+ print("Number of failed tests: %s" % str(self.fails))
# write summary to logfile
self.logfile.write("Summary\n")
@@ -420,8 +417,8 @@ def run_all_tests(self):
self.logfile.write("Failed tests: ".ljust(
15) + "%i\n" % self.fails)
except:
- print "Exception occurred"
- print sys.exc_info()
+ print("Exception occurred")
+ print(sys.exc_info())
self.fails = 1
# drop logs from all executions to a logfile
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index c482ea8..1c5f390 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index ccc22ec..0e049a6 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
@@ -31,7 +32,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
-
+from __future__ import print_function
import sys
sockets = []
@@ -55,7 +56,7 @@
for core in core_details:
for field in ["processor", "core id", "physical id"]:
if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
+ print("Error getting '%s' value from /proc/cpuinfo" % field)
sys.exit(1)
core[field] = int(core[field])
@@ -68,29 +69,30 @@
core_map[key] = []
core_map[key].append(core["processor"])
-print "============================================================"
-print "Core and Socket Information (as reported by '/proc/cpuinfo')"
-print "============================================================\n"
-print "cores = ", cores
-print "sockets = ", sockets
-print ""
+print("============================================================")
+print("Core and Socket Information (as reported by '/proc/cpuinfo')")
+print("============================================================\n")
+print("cores = ", cores)
+print("sockets = ", sockets)
+print("")
max_processor_len = len(str(len(cores) * len(sockets) * 2 - 1))
max_core_map_len = max_processor_len * 2 + len('[, ]') + len('Socket ')
max_core_id_len = len(str(max(cores)))
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
-print ""
+ output += " Socket %s" % str(s).ljust(max_core_map_len - len('Socket '))
+print(output)
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "--------".ljust(max_core_map_len),
-print ""
+ output += " --------".ljust(max_core_map_len)
+ output += " "
+print(output)
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
+ output = "Core %s" % str(c).ljust(max_core_id_len)
for s in sockets:
- print str(core_map[(s, c)]).ljust(max_core_map_len),
- print ""
+ output += " " + str(core_map[(s, c)]).ljust(max_core_map_len)
+ print(output)
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index 4f51a4b..e057b87 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#! /usr/bin/env python
#
# BSD LICENSE
#
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3d3ad7d..d4e51aa 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -1,9 +1,11 @@
#!/usr/bin/env python
+
# -------------------------------------------------------------------------
#
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+from __future__ import print_function
import json
import os
import platform
@@ -54,7 +56,7 @@ def addDevice(self, deviceStr):
self.devices[devID] = Device(deviceStr)
def report(self):
- print self.ID, self.name
+ print(self.ID, self.name)
for id, dev in self.devices.items():
dev.report()
@@ -80,7 +82,7 @@ def __init__(self, deviceStr):
self.subdevices = {}
def report(self):
- print "\t%s\t%s" % (self.ID, self.name)
+ print("\t%s\t%s" % (self.ID, self.name))
for subID, subdev in self.subdevices.items():
subdev.report()
@@ -126,7 +128,7 @@ def __init__(self, vendor, device, name):
self.name = name
def report(self):
- print "\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name)
+ print("\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name))
class PCIIds:
@@ -154,7 +156,7 @@ def reportVendors(self):
"""Reports the vendors
"""
for vid, v in self.vendors.items():
- print v.ID, v.name
+ print(v.ID, v.name)
def report(self, vendor=None):
"""
@@ -185,7 +187,7 @@ def findDate(self, content):
def parse(self):
if len(self.contents) < 1:
- print "data/%s-pci.ids not found" % self.date
+ print("data/%s-pci.ids not found" % self.date)
else:
vendorID = ""
deviceID = ""
@@ -432,7 +434,7 @@ def process_dt_needed_entries(self):
for tag in dynsec.iter_tags():
if tag.entry.d_tag == 'DT_NEEDED':
- rc = tag.needed.find("librte_pmd")
+ rc = tag.needed.find(b"librte_pmd")
if (rc != -1):
library = search_file(tag.needed,
runpath + ":" + ldlibpath +
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
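The trickiest conversions above are the trailing-comma prints in
cpu_layout.py, which have no direct Python 3 equivalent with identical
spacing; hence the accumulate-then-print rewrite. The pattern in
isolation (a minimal sketch, not the full script):

    from __future__ import print_function

    sockets = [0, 1]

    # Python 2's "print x," idiom is replaced by building a whole row
    # in a string and printing it once, which behaves identically on
    # Python 2 and 3.
    output = " ".ljust(10)
    for s in sockets:
        output += " Socket %s" % str(s).ljust(8)
    print(output)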
* [dpdk-dev] [PATCH v2 3/3] doc: add required python versions to docs
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (11 preceding siblings ...)
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 2/3] app: make python apps python2/3 compliant John McNamara
@ 2016-12-18 14:25 ` John McNamara
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 0/3] app: make python apps python2/3 compliant John McNamara
` (7 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:25 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
Add a requirement to support both Python 2 and 3 to the
DPDK Python Coding Standards and the Getting Started Guide.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
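For context, the documented requirement could be enforced at script
startup with a guard along these lines (a hypothetical addition, not
part of this patch):

    import sys

    # Hypothetical startup check matching the documented requirement:
    # Python 2.7+ or 3.2+.
    if sys.version_info[:2] < (2, 7) or (
            sys.version_info[0] == 3 and sys.version_info[:2] < (3, 2)):
        sys.exit("Error: Python 2.7+ or 3.2+ is required")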
doc/guides/contributing/coding_style.rst | 3 ++-
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 1eb67f3..4163960 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -690,6 +690,7 @@ Control Statements
Python Code
-----------
-All python code should be compliant with `PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
+All Python code should work with Python 2.7+ and 3.2+ and be compliant with
+`PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
The ``pep8`` tool can be used for testing compliance with the guidelines.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 3d74342..9653a13 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -86,7 +86,7 @@ Compilation of the DPDK
.. note::
- Python, version 2.6 or 2.7, to use various helper scripts included in the DPDK package.
+ Python, version 2.7+ or 3.2+, to use various helper scripts included in the DPDK package.
**Optional Tools:**
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (12 preceding siblings ...)
2016-12-18 14:25 ` [dpdk-dev] [PATCH v2 3/3] doc: add required python versions to docs John McNamara
@ 2016-12-18 14:32 ` John McNamara
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 1/3] app: make python apps pep8 compliant John McNamara
` (6 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:32 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
These patches refactor the DPDK Python applications to make them Python 2/3
compatible.
In order to do this, the patchset starts by making the apps PEP8 compliant in
accordance with the DPDK Coding guidelines:
http://dpdk.org/doc/guides/contributing/coding_style.html#python-code
Implementing PEP8 and Python 2/3 compliance means that we can check all future
Python patches for consistency. Python 2/3 support also makes downstream
packaging easier as more distros move to Python 3 as the system Python.
See the previous discussion about Python 2/3 compatibility here:
http://dpdk.org/ml/archives/dev/2016-December/051683.html
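As an illustration, the PEP8 pass is largely mechanical and follows the
pattern below (a made-up snippet, not taken from the tree):

    # Before: tab indentation, "== None", no space after the comma:
    #
    # def runTest(child,test):
    #     if test["Result"] == None:
    #         return 0

    # After: 4-space indents, "is None" and PEP8-compliant spacing.
    def run_test(child, test):
        if test["Result"] is None:
            return 0
        child.expect(test["Result"], 1)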
V3: * Squash shebang patch into Python 3 patch.
* Only add /usr/bin/env shebang line to code that is executable.
V2: * Fix broken rebase.
John McNamara (3):
app: make python apps pep8 compliant
app: make python apps python2/3 compliant
doc: add required python versions to docs
app/cmdline_test/cmdline_test.py | 87 ++-
app/cmdline_test/cmdline_test_data.py | 403 +++++-----
app/test/autotest.py | 46 +-
app/test/autotest_data.py | 831 ++++++++++-----------
app/test/autotest_runner.py | 740 +++++++++---------
app/test/autotest_test_funcs.py | 481 ++++++------
doc/guides/conf.py | 9 +-
doc/guides/contributing/coding_style.rst | 3 +-
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 79 +-
tools/dpdk-devbind.py | 25 +-
tools/dpdk-pmdinfo.py | 75 +-
14 files changed, 1405 insertions(+), 1400 deletions(-)
--
2.7.4
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] app: make python apps pep8 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (13 preceding siblings ...)
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 0/3] app: make python apps python2/3 compliant John McNamara
@ 2016-12-18 14:32 ` John McNamara
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 2/3] app: make python apps python2/3 compliant John McNamara
` (5 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:32 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
Make all DPDK Python applications compliant with the PEP8 standard
to allow for consistency checking of patches and to allow further
refactoring.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 81 +-
app/cmdline_test/cmdline_test_data.py | 401 +++++-----
app/test/autotest.py | 40 +-
app/test/autotest_data.py | 831 +++++++++++----------
app/test/autotest_runner.py | 739 +++++++++---------
app/test/autotest_test_funcs.py | 479 ++++++------
doc/guides/conf.py | 9 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 55 +-
tools/dpdk-devbind.py | 23 +-
tools/dpdk-pmdinfo.py | 61 +-
12 files changed, 1376 insertions(+), 1367 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 8efc5ea..4729987 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -33,16 +33,21 @@
# Script that runs cmdline_test app and feeds keystrokes into it.
-import sys, pexpect, string, os, cmdline_test_data
+import cmdline_test_data
+import os
+import pexpect
+import sys
+
#
# function to run test
#
-def runTest(child,test):
- child.send(test["Sequence"])
- if test["Result"] == None:
- return 0
- child.expect(test["Result"],1)
+def runTest(child, test):
+ child.send(test["Sequence"])
+ if test["Result"] is None:
+ return 0
+ child.expect(test["Result"], 1)
+
#
# history test is a special case
@@ -57,57 +62,57 @@ def runTest(child,test):
# This is a self-contained test, it needs only a pexpect child
#
def runHistoryTest(child):
- # find out history size
- child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
- child.expect("History buffer size: \\d+", timeout=1)
- history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
- i = 0
+ # find out history size
+ child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
+ child.expect("History buffer size: \\d+", timeout=1)
+ history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
+ i = 0
- # fill the history with numbers
- while i < history_size / 10:
- # add 1 to prevent from parsing as octals
- child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
- # the app will simply print out the number
- child.expect(str(i + 100000000), timeout=1)
- i += 1
- # scroll back history
- child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
- child.expect("100000000", timeout=1)
+ # fill the history with numbers
+ while i < history_size / 10:
+ # add 1 to prevent from parsing as octals
+ child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
+ # the app will simply print out the number
+ child.expect(str(i + 100000000), timeout=1)
+ i += 1
+ # scroll back history
+ child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
+ child.expect("100000000", timeout=1)
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
child = pexpect.spawn(test_app_path)
print "Running command-line tests..."
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
- try:
- runTest(child,test)
- print "PASS"
- except:
- print "FAIL"
- print child
- sys.exit(1)
+ print (test["Name"] + ":").ljust(30),
+ try:
+ runTest(child, test)
+ print "PASS"
+ except:
+ print "FAIL"
+ print child
+ sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
print ("History fill test:").ljust(30),
try:
- runHistoryTest(child)
- print "PASS"
+ runHistoryTest(child)
+ print "PASS"
except:
- print "FAIL"
- print child
- sys.exit(1)
+ print "FAIL"
+ print child
+ sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index b1945a5..3ce6cbc 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -33,8 +33,6 @@
# collection of static data
-import sys
-
# keycode constants
CTRL_A = chr(1)
CTRL_B = chr(2)
@@ -95,217 +93,220 @@
# and expected output (if any).
tests = [
-# test basic commands
- {"Name" : "command test 1",
- "Sequence" : "ambiguous first" + ENTER,
- "Result" : CMD1},
- {"Name" : "command test 2",
- "Sequence" : "ambiguous second" + ENTER,
- "Result" : CMD2},
- {"Name" : "command test 3",
- "Sequence" : "ambiguous ambiguous" + ENTER,
- "Result" : AMBIG},
- {"Name" : "command test 4",
- "Sequence" : "ambiguous ambiguous2" + ENTER,
- "Result" : AMBIG},
+ # test basic commands
+ {"Name": "command test 1",
+ "Sequence": "ambiguous first" + ENTER,
+ "Result": CMD1},
+ {"Name": "command test 2",
+ "Sequence": "ambiguous second" + ENTER,
+ "Result": CMD2},
+ {"Name": "command test 3",
+ "Sequence": "ambiguous ambiguous" + ENTER,
+ "Result": AMBIG},
+ {"Name": "command test 4",
+ "Sequence": "ambiguous ambiguous2" + ENTER,
+ "Result": AMBIG},
- {"Name" : "invalid command test 1",
- "Sequence" : "ambiguous invalid" + ENTER,
- "Result" : BAD_ARG},
-# test invalid commands
- {"Name" : "invalid command test 2",
- "Sequence" : "invalid" + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "invalid command test 3",
- "Sequence" : "ambiguousinvalid" + ENTER2,
- "Result" : NOT_FOUND},
+ {"Name": "invalid command test 1",
+ "Sequence": "ambiguous invalid" + ENTER,
+ "Result": BAD_ARG},
+ # test invalid commands
+ {"Name": "invalid command test 2",
+ "Sequence": "invalid" + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "invalid command test 3",
+ "Sequence": "ambiguousinvalid" + ENTER2,
+ "Result": NOT_FOUND},
-# test arrows and deletes
- {"Name" : "arrows & delete test 1",
- "Sequence" : "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
- "Result" : SINGLE},
- {"Name" : "arrows & delete test 2",
- "Sequence" : "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
- "Result" : SINGLE},
+ # test arrows and deletes
+ {"Name": "arrows & delete test 1",
+ "Sequence": "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
+ "Result": SINGLE},
+ {"Name": "arrows & delete test 2",
+ "Sequence": "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
+ "Result": SINGLE},
-# test backspace
- {"Name" : "backspace test",
- "Sequence" : "singlebad" + BKSPACE*3 + ENTER,
- "Result" : SINGLE},
+ # test backspace
+ {"Name": "backspace test",
+ "Sequence": "singlebad" + BKSPACE*3 + ENTER,
+ "Result": SINGLE},
-# test goto left and goto right
- {"Name" : "goto left test",
- "Sequence" : "biguous first" + CTRL_A + "am" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right test",
- "Sequence" : "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
- "Result" : CMD1},
+ # test goto left and goto right
+ {"Name": "goto left test",
+ "Sequence": "biguous first" + CTRL_A + "am" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right test",
+ "Sequence": "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
+ "Result": CMD1},
-# test goto words
- {"Name" : "goto left word test",
- "Sequence" : "ambiguous st" + ALT_B + "fir" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right word test",
- "Sequence" : "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
- "Result" : CMD1},
+ # test goto words
+ {"Name": "goto left word test",
+ "Sequence": "ambiguous st" + ALT_B + "fir" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right word test",
+ "Sequence": "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
+ "Result": CMD1},
-# test removing words
- {"Name" : "remove left word 1",
- "Sequence" : "single invalid" + CTRL_W + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove left word 2",
- "Sequence" : "single invalid" + ALT_BKSPACE + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove right word",
- "Sequence" : "single invalid" + ALT_B + ALT_D + ENTER,
- "Result" : SINGLE},
+ # test removing words
+ {"Name": "remove left word 1",
+ "Sequence": "single invalid" + CTRL_W + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove left word 2",
+ "Sequence": "single invalid" + ALT_BKSPACE + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove right word",
+ "Sequence": "single invalid" + ALT_B + ALT_D + ENTER,
+ "Result": SINGLE},
-# test kill buffer (copy and paste)
- {"Name" : "killbuffer test 1",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A + CTRL_Y + ENTER,
- "Result" : CMD1},
- {"Name" : "killbuffer test 2",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
- "Result" : NOT_FOUND},
+ # test kill buffer (copy and paste)
+ {"Name": "killbuffer test 1",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A +
+ CTRL_Y + ENTER,
+ "Result": CMD1},
+ {"Name": "killbuffer test 2",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
+ "Result": NOT_FOUND},
-# test newline
- {"Name" : "newline test",
- "Sequence" : "invalid" + CTRL_C + "single" + ENTER,
- "Result" : SINGLE},
+ # test newline
+ {"Name": "newline test",
+ "Sequence": "invalid" + CTRL_C + "single" + ENTER,
+ "Result": SINGLE},
-# test redisplay (nothing should really happen)
- {"Name" : "redisplay test",
- "Sequence" : "single" + CTRL_L + ENTER,
- "Result" : SINGLE},
+ # test redisplay (nothing should really happen)
+ {"Name": "redisplay test",
+ "Sequence": "single" + CTRL_L + ENTER,
+ "Result": SINGLE},
-# test autocomplete
- {"Name" : "autocomplete test 1",
- "Sequence" : "si" + TAB + ENTER,
- "Result" : SINGLE},
- {"Name" : "autocomplete test 2",
- "Sequence" : "si" + TAB + "_" + TAB + ENTER,
- "Result" : SINGLE_LONG},
- {"Name" : "autocomplete test 3",
- "Sequence" : "in" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 4",
- "Sequence" : "am" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 5",
- "Sequence" : "am" + TAB + "fir" + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 6",
- "Sequence" : "am" + TAB + "fir" + TAB + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 7",
- "Sequence" : "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 8",
- "Sequence" : "am" + TAB + " am" + TAB + " " + ENTER,
- "Result" : AMBIG},
- {"Name" : "autocomplete test 9",
- "Sequence" : "am" + TAB + "inv" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 10",
- "Sequence" : "au" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 11",
- "Sequence" : "au" + TAB + "1" + ENTER,
- "Result" : AUTO1},
- {"Name" : "autocomplete test 12",
- "Sequence" : "au" + TAB + "2" + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 13",
- "Sequence" : "au" + TAB + "2" + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 14",
- "Sequence" : "au" + TAB + "2 " + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 15",
- "Sequence" : "24" + TAB + ENTER,
- "Result" : "24"},
+ # test autocomplete
+ {"Name": "autocomplete test 1",
+ "Sequence": "si" + TAB + ENTER,
+ "Result": SINGLE},
+ {"Name": "autocomplete test 2",
+ "Sequence": "si" + TAB + "_" + TAB + ENTER,
+ "Result": SINGLE_LONG},
+ {"Name": "autocomplete test 3",
+ "Sequence": "in" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 4",
+ "Sequence": "am" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 5",
+ "Sequence": "am" + TAB + "fir" + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 6",
+ "Sequence": "am" + TAB + "fir" + TAB + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 7",
+ "Sequence": "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 8",
+ "Sequence": "am" + TAB + " am" + TAB + " " + ENTER,
+ "Result": AMBIG},
+ {"Name": "autocomplete test 9",
+ "Sequence": "am" + TAB + "inv" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 10",
+ "Sequence": "au" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 11",
+ "Sequence": "au" + TAB + "1" + ENTER,
+ "Result": AUTO1},
+ {"Name": "autocomplete test 12",
+ "Sequence": "au" + TAB + "2" + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 13",
+ "Sequence": "au" + TAB + "2" + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 14",
+ "Sequence": "au" + TAB + "2 " + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 15",
+ "Sequence": "24" + TAB + ENTER,
+ "Result": "24"},
-# test history
- {"Name" : "history test 1",
- "Sequence" : "invalid" + ENTER + "single" + ENTER + "invalid" + ENTER + UP + CTRL_P + ENTER,
- "Result" : SINGLE},
- {"Name" : "history test 2",
- "Sequence" : "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" + ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
- "Result" : SINGLE},
+ # test history
+ {"Name": "history test 1",
+ "Sequence": "invalid" + ENTER + "single" + ENTER + "invalid" +
+ ENTER + UP + CTRL_P + ENTER,
+ "Result": SINGLE},
+ {"Name": "history test 2",
+ "Sequence": "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" +
+ ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
+ "Result": SINGLE},
-#
-# tests that improve coverage
-#
+ #
+ # tests that improve coverage
+ #
-# empty space tests
- {"Name" : "empty space test 1",
- "Sequence" : RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 2",
- "Sequence" : BKSPACE + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 3",
- "Sequence" : CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 4",
- "Sequence" : ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 5",
- "Sequence" : " " + CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 6",
- "Sequence" : " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 7",
- "Sequence" : " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 8",
- "Sequence" : " space" + CTRL_W*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 9",
- "Sequence" : " space" + ALT_BKSPACE*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 10",
- "Sequence" : " space " + CTRL_A + ALT_D*3 + ENTER,
- "Result" : PROMPT},
+ # empty space tests
+ {"Name": "empty space test 1",
+ "Sequence": RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 2",
+ "Sequence": BKSPACE + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 3",
+ "Sequence": CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 4",
+ "Sequence": ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 5",
+ "Sequence": " " + CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 6",
+ "Sequence": " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 7",
+ "Sequence": " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 8",
+ "Sequence": " space" + CTRL_W*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 9",
+ "Sequence": " space" + ALT_BKSPACE*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 10",
+ "Sequence": " space " + CTRL_A + ALT_D*3 + ENTER,
+ "Result": PROMPT},
-# non-printable char tests
- {"Name" : "non-printable test 1",
- "Sequence" : chr(27) + chr(47) + ENTER,
- "Result" : PROMPT},
- {"Name" : "non-printable test 2",
- "Sequence" : chr(27) + chr(128) + ENTER*7,
- "Result" : PROMPT},
- {"Name" : "non-printable test 3",
- "Sequence" : chr(27) + chr(91) + chr(127) + ENTER*6,
- "Result" : PROMPT},
+ # non-printable char tests
+ {"Name": "non-printable test 1",
+ "Sequence": chr(27) + chr(47) + ENTER,
+ "Result": PROMPT},
+ {"Name": "non-printable test 2",
+ "Sequence": chr(27) + chr(128) + ENTER*7,
+ "Result": PROMPT},
+ {"Name": "non-printable test 3",
+ "Sequence": chr(27) + chr(91) + chr(127) + ENTER*6,
+ "Result": PROMPT},
-# miscellaneous tests
- {"Name" : "misc test 1",
- "Sequence" : ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 2",
- "Sequence" : "single #comment" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 3",
- "Sequence" : "#empty line" + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 4",
- "Sequence" : " single " + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 5",
- "Sequence" : "single#" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 6",
- "Sequence" : 'a' * 257 + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "misc test 7",
- "Sequence" : "clear_history" + UP*5 + DOWN*5 + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 8",
- "Sequence" : "a" + HELP + CTRL_C,
- "Result" : PROMPT},
- {"Name" : "misc test 9",
- "Sequence" : CTRL_D*3,
- "Result" : None},
+ # miscellaneous tests
+ {"Name": "misc test 1",
+ "Sequence": ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 2",
+ "Sequence": "single #comment" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 3",
+ "Sequence": "#empty line" + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 4",
+ "Sequence": " single " + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 5",
+ "Sequence": "single#" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 6",
+ "Sequence": 'a' * 257 + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "misc test 7",
+ "Sequence": "clear_history" + UP*5 + DOWN*5 + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 8",
+ "Sequence": "a" + HELP + CTRL_C,
+ "Result": PROMPT},
+ {"Name": "misc test 9",
+ "Sequence": CTRL_D*3,
+ "Result": None},
]
diff --git a/app/test/autotest.py b/app/test/autotest.py
index b9fd6b6..3a00538 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -33,44 +33,46 @@
# Script that uses either test app or qemu controlled by python-pexpect
-import sys, autotest_data, autotest_runner
-
+import autotest_data
+import autotest_runner
+import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print"Usage: autotest.py [test app|test iso image]",
+ print "[target] [whitelist|-blacklist]"
if len(sys.argv) < 3:
- usage()
- sys.exit(1)
+ usage()
+ sys.exit(1)
target = sys.argv[2]
-test_whitelist=None
-test_blacklist=None
+test_whitelist = None
+test_blacklist = None
# get blacklist/whitelist
if len(sys.argv) > 3:
- testlist = sys.argv[3].split(',')
- testlist = [test.lower() for test in testlist]
- if testlist[0].startswith('-'):
- testlist[0] = testlist[0].lstrip('-')
- test_blacklist = testlist
- else:
- test_whitelist = testlist
+ testlist = sys.argv[3].split(',')
+ testlist = [test.lower() for test in testlist]
+ if testlist[0].startswith('-'):
+ testlist[0] = testlist[0].lstrip('-')
+ test_blacklist = testlist
+ else:
+ test_whitelist = testlist
-cmdline = "%s -c f -n 4"%(sys.argv[1])
+cmdline = "%s -c f -n 4" % (sys.argv[1])
print cmdline
-runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist, test_whitelist)
+runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
+ test_whitelist)
for test_group in autotest_data.parallel_test_group_list:
- runner.add_parallel_test_group(test_group)
+ runner.add_parallel_test_group(test_group)
for test_group in autotest_data.non_parallel_test_group_list:
- runner.add_non_parallel_test_group(test_group)
+ runner.add_non_parallel_test_group(test_group)
num_fails = runner.run_all_tests()
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 9e8fd94..0cf4cfd 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -36,12 +36,14 @@
from glob import glob
from autotest_test_funcs import *
+
# quick and dirty function to find out number of sockets
def num_sockets():
- result = len(glob("/sys/devices/system/node/node*"))
- if result == 0:
- return 1
- return result
+ result = len(glob("/sys/devices/system/node/node*"))
+ if result == 0:
+ return 1
+ return result
+
# Assign given number to each socket
# e.g. 32 becomes 32,32 or 32,32,32,32
@@ -51,420 +53,419 @@ def per_sockets(num):
# groups of tests that can be run in parallel
# the grouping has been found largely empirically
parallel_test_group_list = [
-
-{
- "Prefix": "group_1",
- "Memory" : per_sockets(8),
- "Tests" :
- [
- {
- "Name" : "Cycles autotest",
- "Command" : "cycles_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Timer autotest",
- "Command" : "timer_autotest",
- "Func" : timer_autotest,
- "Report" : None,
- },
- {
- "Name" : "Debug autotest",
- "Command" : "debug_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Errno autotest",
- "Command" : "errno_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Meter autotest",
- "Command" : "meter_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Common autotest",
- "Command" : "common_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Resource autotest",
- "Command" : "resource_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_2",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Memory autotest",
- "Command" : "memory_autotest",
- "Func" : memory_autotest,
- "Report" : None,
- },
- {
- "Name" : "Read/write lock autotest",
- "Command" : "rwlock_autotest",
- "Func" : rwlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Logs autotest",
- "Command" : "logs_autotest",
- "Func" : logs_autotest,
- "Report" : None,
- },
- {
- "Name" : "CPU flags autotest",
- "Command" : "cpuflags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Version autotest",
- "Command" : "version_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL filesystem autotest",
- "Command" : "eal_fs_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL flags autotest",
- "Command" : "eal_flags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Hash autotest",
- "Command" : "hash_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ],
-},
-{
- "Prefix": "group_3",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "LPM autotest",
- "Command" : "lpm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "LPM6 autotest",
- "Command" : "lpm6_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memcpy autotest",
- "Command" : "memcpy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memzone autotest",
- "Command" : "memzone_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "String autotest",
- "Command" : "string_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Alarm autotest",
- "Command" : "alarm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_4",
- "Memory" : per_sockets(128),
- "Tests" :
- [
- {
- "Name" : "PCI autotest",
- "Command" : "pci_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Malloc autotest",
- "Command" : "malloc_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Multi-process autotest",
- "Command" : "multiprocess_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mbuf autotest",
- "Command" : "mbuf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Per-lcore autotest",
- "Command" : "per_lcore_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Ring autotest",
- "Command" : "ring_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_5",
- "Memory" : "32",
- "Tests" :
- [
- {
- "Name" : "Spinlock autotest",
- "Command" : "spinlock_autotest",
- "Func" : spinlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Byte order autotest",
- "Command" : "byteorder_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "TAILQ autotest",
- "Command" : "tailq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Command-line autotest",
- "Command" : "cmdline_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Interrupts autotest",
- "Command" : "interrupt_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_6",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Function reentrancy autotest",
- "Command" : "func_reentrancy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mempool autotest",
- "Command" : "mempool_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Atomics autotest",
- "Command" : "atomic_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Prefetch autotest",
- "Command" : "prefetch_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Red autotest",
- "Command" : "red_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
-{
- "Prefix" : "group_7",
- "Memory" : "64",
- "Tests" :
- [
- {
- "Name" : "PMD ring autotest",
- "Command" : "ring_pmd_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Access list control autotest",
- "Command" : "acl_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Sched autotest",
- "Command" : "sched_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
+ {
+ "Prefix": "group_1",
+ "Memory": per_sockets(8),
+ "Tests":
+ [
+ {
+ "Name": "Cycles autotest",
+ "Command": "cycles_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Timer autotest",
+ "Command": "timer_autotest",
+ "Func": timer_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Debug autotest",
+ "Command": "debug_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Errno autotest",
+ "Command": "errno_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Meter autotest",
+ "Command": "meter_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Common autotest",
+ "Command": "common_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Resource autotest",
+ "Command": "resource_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_2",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Memory autotest",
+ "Command": "memory_autotest",
+ "Func": memory_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Read/write lock autotest",
+ "Command": "rwlock_autotest",
+ "Func": rwlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Logs autotest",
+ "Command": "logs_autotest",
+ "Func": logs_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "CPU flags autotest",
+ "Command": "cpuflags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Version autotest",
+ "Command": "version_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL filesystem autotest",
+ "Command": "eal_fs_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL flags autotest",
+ "Command": "eal_flags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Hash autotest",
+ "Command": "hash_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ],
+ },
+ {
+ "Prefix": "group_3",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "LPM autotest",
+ "Command": "lpm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "LPM6 autotest",
+ "Command": "lpm6_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memcpy autotest",
+ "Command": "memcpy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memzone autotest",
+ "Command": "memzone_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "String autotest",
+ "Command": "string_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Alarm autotest",
+ "Command": "alarm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_4",
+ "Memory": per_sockets(128),
+ "Tests":
+ [
+ {
+ "Name": "PCI autotest",
+ "Command": "pci_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Malloc autotest",
+ "Command": "malloc_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Multi-process autotest",
+ "Command": "multiprocess_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mbuf autotest",
+ "Command": "mbuf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Per-lcore autotest",
+ "Command": "per_lcore_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Ring autotest",
+ "Command": "ring_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_5",
+ "Memory": "32",
+ "Tests":
+ [
+ {
+ "Name": "Spinlock autotest",
+ "Command": "spinlock_autotest",
+ "Func": spinlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Byte order autotest",
+ "Command": "byteorder_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "TAILQ autotest",
+ "Command": "tailq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Command-line autotest",
+ "Command": "cmdline_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Interrupts autotest",
+ "Command": "interrupt_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_6",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Function reentrancy autotest",
+ "Command": "func_reentrancy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mempool autotest",
+ "Command": "mempool_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Atomics autotest",
+ "Command": "atomic_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Prefetch autotest",
+ "Command": "prefetch_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Red autotest",
+ "Command": "red_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_7",
+ "Memory": "64",
+ "Tests":
+ [
+ {
+ "Name": "PMD ring autotest",
+ "Command": "ring_pmd_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Access list control autotest",
+ "Command": "acl_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Sched autotest",
+ "Command": "sched_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
# tests that should not be run when any other tests are running
non_parallel_test_group_list = [
-{
- "Prefix" : "kni",
- "Memory" : "512",
- "Tests" :
- [
- {
- "Name" : "KNI autotest",
- "Command" : "kni_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "mempool_perf",
- "Memory" : per_sockets(256),
- "Tests" :
- [
- {
- "Name" : "Mempool performance autotest",
- "Command" : "mempool_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "memcpy_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Memcpy performance autotest",
- "Command" : "memcpy_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "hash_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Hash performance autotest",
- "Command" : "hash_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power autotest",
- "Command" : "power_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_acpi_cpufreq",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power ACPI cpufreq autotest",
- "Command" : "power_acpi_cpufreq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_kvm_vm",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power KVM VM autotest",
- "Command" : "power_kvm_vm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "timer_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Timer performance autotest",
- "Command" : "timer_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ {
+ "Prefix": "kni",
+ "Memory": "512",
+ "Tests":
+ [
+ {
+ "Name": "KNI autotest",
+ "Command": "kni_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "mempool_perf",
+ "Memory": per_sockets(256),
+ "Tests":
+ [
+ {
+ "Name": "Mempool performance autotest",
+ "Command": "mempool_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "memcpy_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Memcpy performance autotest",
+ "Command": "memcpy_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "hash_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Hash performance autotest",
+ "Command": "hash_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power autotest",
+ "Command": "power_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_acpi_cpufreq",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power ACPI cpufreq autotest",
+ "Command": "power_acpi_cpufreq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_kvm_vm",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power KVM VM autotest",
+ "Command": "power_kvm_vm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "timer_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Timer performance autotest",
+ "Command": "timer_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
-#
-# Please always make sure that ring_perf is the last test!
-#
-{
- "Prefix": "ring_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Ring performance autotest",
- "Command" : "ring_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ #
+ # Please always make sure that ring_perf is the last test!
+ #
+ {
+ "Prefix": "ring_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Ring performance autotest",
+ "Command": "ring_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
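
Every entry in these lists follows the same schema, and the per_sockets()
helper used above is defined earlier in autotest_data.py. As a rough,
self-contained sketch (the helper body, the two-socket default and the
group values below are illustrative assumptions, not the shipped code):

    # hypothetical sketch of the test-group schema; this per_sockets()
    # is a stand-in for the real helper defined elsewhere in the file
    from autotest_test_funcs import default_autotest

    def per_sockets(mem_mb, n_sockets=2):
        # one allotment per socket, e.g. "512,512" -- the comma-separated
        # form that the EAL --socket-mem option expects
        return ",".join([str(mem_mb)] * n_sockets)

    example_group = {
        "Prefix": "group_example",   # unique --file-prefix, so groups can run in parallel
        "Memory": per_sockets(128),  # either a plain string ("64") or per-socket values
        "Tests": [
            {
                "Name": "Example autotest",     # human-readable name for logs/reports
                "Command": "example_autotest",  # command sent to the test app prompt
                "Func": default_autotest,       # pexpect driver from autotest_test_funcs.py
                "Report": None,                 # optional report callback; always None here
            },
        ],
    }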
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 21d3be2..55b63a8 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -33,20 +33,29 @@
# The main logic behind running autotests in parallel
-import multiprocessing, subprocess, sys, pexpect, re, time, os, StringIO, csv
+import StringIO
+import csv
+import multiprocessing
+import pexpect
+import re
+import subprocess
+import sys
+import time
# wait for prompt
+
+
def wait_prompt(child):
- try:
- child.sendline()
- result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
- timeout = 120)
- except:
- return False
- if result == 0:
- return True
- else:
- return False
+ try:
+ child.sendline()
+ result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
+ timeout=120)
+ except:
+ return False
+ if result == 0:
+ return True
+ else:
+ return False
# run a test group
# each result tuple in results list consists of:
@@ -60,363 +69,363 @@ def wait_prompt(child):
# this function needs to be outside AutotestRunner class
# because otherwise Pool won't work (or rather it will require
# quite a bit of effort to make it work).
-def run_test_group(cmdline, test_group):
- results = []
- child = None
- start_time = time.time()
- startuplog = None
-
- # run test app
- try:
- # prepare logging of init
- startuplog = StringIO.StringIO()
-
- print >>startuplog, "\n%s %s\n" % ("="*20, test_group["Prefix"])
- print >>startuplog, "\ncmdline=%s" % cmdline
-
- child = pexpect.spawn(cmdline, logfile=startuplog)
-
- # wait for target to boot
- if not wait_prompt(child):
- child.close()
-
- results.append((-1, "Fail [No prompt]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for test in test_group["Tests"]:
- results.append((-1, "Fail [No prompt]", test["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- except:
- results.append((-1, "Fail [Can't run]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for t in test_group["Tests"]:
- results.append((-1, "Fail [Can't run]", t["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- # startup was successful
- results.append((0, "Success", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # parse the binary for available test commands
- binary = cmdline.split()[0]
- stripped = 'not stripped' not in subprocess.check_output(['file', binary])
- if not stripped:
- symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
- avail_cmds = re.findall('test_register_(\w+)', symbols)
-
- # run all tests in test group
- for test in test_group["Tests"]:
-
- # create log buffer for each test
- # in multiprocessing environment, the logging would be
- # interleaved and will create a mess, hence the buffering
- logfile = StringIO.StringIO()
- child.logfile = logfile
-
- result = ()
-
- # make a note when the test started
- start_time = time.time()
-
- try:
- # print test name to log buffer
- print >>logfile, "\n%s %s\n" % ("-"*20, test["Name"])
-
- # run test function associated with the test
- if stripped or test["Command"] in avail_cmds:
- result = test["Func"](child, test["Command"])
- else:
- result = (0, "Skipped [Not Available]")
-
- # make a note when the test was finished
- end_time = time.time()
-
- # append test data to the result tuple
- result += (test["Name"], end_time - start_time,
- logfile.getvalue())
-
- # call report function, if any defined, and supply it with
- # target and complete log for test run
- if test["Report"]:
- report = test["Report"](self.target, log)
-
- # append report to results tuple
- result += (report,)
- else:
- # report is None
- result += (None,)
- except:
- # make a note when the test crashed
- end_time = time.time()
-
- # mark test as failed
- result = (-1, "Fail [Crash]", test["Name"],
- end_time - start_time, logfile.getvalue(), None)
- finally:
- # append the results to the results list
- results.append(result)
-
- # regardless of whether test has crashed, try quitting it
- try:
- child.sendline("quit")
- child.close()
- # if the test crashed, just do nothing instead
- except:
- # nop
- pass
-
- # return test results
- return results
-
+def run_test_group(cmdline, test_group):
+ results = []
+ child = None
+ start_time = time.time()
+ startuplog = None
+
+ # run test app
+ try:
+ # prepare logging of init
+ startuplog = StringIO.StringIO()
+
+ print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
+ print >>startuplog, "\ncmdline=%s" % cmdline
+
+ child = pexpect.spawn(cmdline, logfile=startuplog)
+
+ # wait for target to boot
+ if not wait_prompt(child):
+ child.close()
+
+ results.append((-1,
+ "Fail [No prompt]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for test in test_group["Tests"]:
+ results.append((-1, "Fail [No prompt]", test["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ except:
+ results.append((-1,
+ "Fail [Can't run]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for t in test_group["Tests"]:
+ results.append((-1, "Fail [Can't run]", t["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ # startup was successful
+ results.append((0, "Success", "Start %s" % test_group["Prefix"],
+ time.time() - start_time, startuplog.getvalue(), None))
+
+ # parse the binary for available test commands
+ binary = cmdline.split()[0]
+ stripped = 'not stripped' not in subprocess.check_output(['file', binary])
+ if not stripped:
+ symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
+ avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+ # run all tests in test group
+ for test in test_group["Tests"]:
+
+ # create log buffer for each test
+ # in multiprocessing environment, the logging would be
+ # interleaved and will create a mess, hence the buffering
+ logfile = StringIO.StringIO()
+ child.logfile = logfile
+
+ result = ()
+
+ # make a note when the test started
+ start_time = time.time()
+
+ try:
+ # print test name to log buffer
+ print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+
+ # run test function associated with the test
+ if stripped or test["Command"] in avail_cmds:
+ result = test["Func"](child, test["Command"])
+ else:
+ result = (0, "Skipped [Not Available]")
+
+ # make a note when the test was finished
+ end_time = time.time()
+
+ # append test data to the result tuple
+ result += (test["Name"], end_time - start_time,
+ logfile.getvalue())
+
+ # call report function, if any defined, and supply it with
+ # target and complete log for test run
+ if test["Report"]:
+ report = test["Report"](self.target, log)
+
+ # append report to results tuple
+ result += (report,)
+ else:
+ # report is None
+ result += (None,)
+ except:
+ # make a note when the test crashed
+ end_time = time.time()
+
+ # mark test as failed
+ result = (-1, "Fail [Crash]", test["Name"],
+ end_time - start_time, logfile.getvalue(), None)
+ finally:
+ # append the results to the results list
+ results.append(result)
+
+ # regardless of whether test has crashed, try quitting it
+ try:
+ child.sendline("quit")
+ child.close()
+ # if the test crashed, just do nothing instead
+ except:
+ # nop
+ pass
+
+ # return test results
+ return results
# class representing an instance of autotests run
class AutotestRunner:
- cmdline = ""
- parallel_test_groups = []
- non_parallel_test_groups = []
- logfile = None
- csvwriter = None
- target = ""
- start = None
- n_tests = 0
- fails = 0
- log_buffers = []
- blacklist = []
- whitelist = []
-
-
- def __init__(self, cmdline, target, blacklist, whitelist):
- self.cmdline = cmdline
- self.target = target
- self.blacklist = blacklist
- self.whitelist = whitelist
-
- # log file filename
- logfile = "%s.log" % target
- csvfile = "%s.csv" % target
-
- self.logfile = open(logfile, "w")
- csvfile = open(csvfile, "w")
- self.csvwriter = csv.writer(csvfile)
-
- # prepare results table
- self.csvwriter.writerow(["test_name","test_result","result_str"])
-
-
-
- # set up cmdline string
- def __get_cmdline(self, test):
- cmdline = self.cmdline
-
- # append memory limitations for each test
- # otherwise tests won't run in parallel
- if not "i686" in self.target:
- cmdline += " --socket-mem=%s"% test["Memory"]
- else:
- # affinitize startup so that tests don't fail on i686
- cmdline = "taskset 1 " + cmdline
- cmdline += " -m " + str(sum(map(int,test["Memory"].split(","))))
-
- # set group prefix for autotest group
- # otherwise they won't run in parallel
- cmdline += " --file-prefix=%s"% test["Prefix"]
-
- return cmdline
-
-
-
- def add_parallel_test_group(self,test_group):
- self.parallel_test_groups.append(test_group)
-
- def add_non_parallel_test_group(self,test_group):
- self.non_parallel_test_groups.append(test_group)
-
-
- def __process_results(self, results):
- # this iterates over individual test results
- for i, result in enumerate(results):
-
- # increase total number of tests that were run
- # do not include "start" test
- if i > 0:
- self.n_tests += 1
-
- # unpack result tuple
- test_result, result_str, test_name, \
- test_time, log, report = result
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
-
- # don't print out total time every line, it's the same anyway
- if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
- else:
- print ""
-
- # if test failed and it wasn't a "start" test
- if test_result < 0 and not i == 0:
- self.fails += 1
-
- # collect logs
- self.log_buffers.append(log)
-
- # create report if it exists
- if report:
- try:
- f = open("%s_%s_report.rst" % (self.target,test_name), "w")
- except IOError:
- print "Report for %s could not be created!" % test_name
- else:
- with f:
- f.write(report)
-
- # write test result to CSV file
- if i != 0:
- self.csvwriter.writerow([test_name, test_result, result_str])
-
-
-
-
- # this function iterates over test groups and removes each
- # test that is not in whitelist/blacklist
- def __filter_groups(self, test_groups):
- groups_to_remove = []
-
- # filter out tests from parallel test groups
- for i, test_group in enumerate(test_groups):
-
- # iterate over a copy so that we could safely delete individual tests
- for test in test_group["Tests"][:]:
- test_id = test["Command"]
-
- # dump tests are specified in full e.g. "Dump_mempool"
- if "_autotest" in test_id:
- test_id = test_id[:-len("_autotest")]
-
- # filter out blacklisted/whitelisted tests
- if self.blacklist and test_id in self.blacklist:
- test_group["Tests"].remove(test)
- continue
- if self.whitelist and test_id not in self.whitelist:
- test_group["Tests"].remove(test)
- continue
-
- # modify or remove original group
- if len(test_group["Tests"]) > 0:
- test_groups[i] = test_group
- else:
- # remember which groups should be deleted
- # put the numbers backwards so that we start
- # deleting from the end, not from the beginning
- groups_to_remove.insert(0, i)
-
- # remove test groups that need to be removed
- for i in groups_to_remove:
- del test_groups[i]
-
- return test_groups
-
-
-
- # iterate over test groups and run tests associated with them
- def run_all_tests(self):
- # filter groups
- self.parallel_test_groups = \
- self.__filter_groups(self.parallel_test_groups)
- self.non_parallel_test_groups = \
- self.__filter_groups(self.non_parallel_test_groups)
-
- # create a pool of worker threads
- pool = multiprocessing.Pool(processes=1)
-
- results = []
-
- # whatever happens, try to save as much logs as possible
- try:
-
- # create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
-
- # make a note of tests start time
- self.start = time.time()
-
- # assign worker threads to run test groups
- for test_group in self.parallel_test_groups:
- result = pool.apply_async(run_test_group,
- [self.__get_cmdline(test_group), test_group])
- results.append(result)
-
- # iterate while we have group execution results to get
- while len(results) > 0:
-
- # iterate over a copy to be able to safely delete results
- # this iterates over a list of group results
- for group_result in results[:]:
-
- # if the thread hasn't finished yet, continue
- if not group_result.ready():
- continue
-
- res = group_result.get()
-
- self.__process_results(res)
-
- # remove result from results list once we're done with it
- results.remove(group_result)
-
- # run non_parallel tests. they are run one by one, synchronously
- for test_group in self.non_parallel_test_groups:
- group_result = run_test_group(self.__get_cmdline(test_group), test_group)
-
- self.__process_results(group_result)
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60, total_time % 60)
- if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
-
- # write summary to logfile
- self.logfile.write("Summary\n")
- self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
- self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
- self.logfile.write("Failed tests: ".ljust(15) + "%i\n" % self.fails)
- except:
- print "Exception occured"
- print sys.exc_info()
- self.fails = 1
-
- # drop logs from all executions to a logfile
- for buf in self.log_buffers:
- self.logfile.write(buf.replace("\r",""))
-
- log_buffers = []
-
- return self.fails
+ cmdline = ""
+ parallel_test_groups = []
+ non_parallel_test_groups = []
+ logfile = None
+ csvwriter = None
+ target = ""
+ start = None
+ n_tests = 0
+ fails = 0
+ log_buffers = []
+ blacklist = []
+ whitelist = []
+
+ def __init__(self, cmdline, target, blacklist, whitelist):
+ self.cmdline = cmdline
+ self.target = target
+ self.blacklist = blacklist
+ self.whitelist = whitelist
+
+ # log file filename
+ logfile = "%s.log" % target
+ csvfile = "%s.csv" % target
+
+ self.logfile = open(logfile, "w")
+ csvfile = open(csvfile, "w")
+ self.csvwriter = csv.writer(csvfile)
+
+ # prepare results table
+ self.csvwriter.writerow(["test_name", "test_result", "result_str"])
+
+ # set up cmdline string
+ def __get_cmdline(self, test):
+ cmdline = self.cmdline
+
+ # append memory limitations for each test
+ # otherwise tests won't run in parallel
+ if "i686" not in self.target:
+ cmdline += " --socket-mem=%s" % test["Memory"]
+ else:
+ # affinitize startup so that tests don't fail on i686
+ cmdline = "taskset 1 " + cmdline
+ cmdline += " -m " + str(sum(map(int, test["Memory"].split(","))))
+
+ # set group prefix for autotest group
+ # otherwise they won't run in parallel
+ cmdline += " --file-prefix=%s" % test["Prefix"]
+
+ return cmdline
+
+ def add_parallel_test_group(self, test_group):
+ self.parallel_test_groups.append(test_group)
+
+ def add_non_parallel_test_group(self, test_group):
+ self.non_parallel_test_groups.append(test_group)
+
+ def __process_results(self, results):
+ # this iterates over individual test results
+ for i, result in enumerate(results):
+
+ # increase total number of tests that were run
+ # do not include "start" test
+ if i > 0:
+ self.n_tests += 1
+
+ # unpack result tuple
+ test_result, result_str, test_name, \
+ test_time, log, report = result
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print results, test run time and total time since start
+ print ("%s:" % test_name).ljust(30),
+ print result_str.ljust(29),
+ print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+
+ # don't print out total time every line, it's the same anyway
+ if i == len(results) - 1:
+ print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ else:
+ print ""
+
+ # if test failed and it wasn't a "start" test
+ if test_result < 0 and not i == 0:
+ self.fails += 1
+
+ # collect logs
+ self.log_buffers.append(log)
+
+ # create report if it exists
+ if report:
+ try:
+ f = open("%s_%s_report.rst" %
+ (self.target, test_name), "w")
+ except IOError:
+ print "Report for %s could not be created!" % test_name
+ else:
+ with f:
+ f.write(report)
+
+ # write test result to CSV file
+ if i != 0:
+ self.csvwriter.writerow([test_name, test_result, result_str])
+
+ # this function iterates over test groups and removes each
+ # test that is not in whitelist/blacklist
+ def __filter_groups(self, test_groups):
+ groups_to_remove = []
+
+ # filter out tests from parallel test groups
+ for i, test_group in enumerate(test_groups):
+
+ # iterate over a copy so that we could safely delete individual
+ # tests
+ for test in test_group["Tests"][:]:
+ test_id = test["Command"]
+
+ # dump tests are specified in full e.g. "Dump_mempool"
+ if "_autotest" in test_id:
+ test_id = test_id[:-len("_autotest")]
+
+ # filter out blacklisted/whitelisted tests
+ if self.blacklist and test_id in self.blacklist:
+ test_group["Tests"].remove(test)
+ continue
+ if self.whitelist and test_id not in self.whitelist:
+ test_group["Tests"].remove(test)
+ continue
+
+ # modify or remove original group
+ if len(test_group["Tests"]) > 0:
+ test_groups[i] = test_group
+ else:
+ # remember which groups should be deleted
+ # put the numbers backwards so that we start
+ # deleting from the end, not from the beginning
+ groups_to_remove.insert(0, i)
+
+ # remove test groups that need to be removed
+ for i in groups_to_remove:
+ del test_groups[i]
+
+ return test_groups
+
+ # iterate over test groups and run tests associated with them
+ def run_all_tests(self):
+ # filter groups
+ self.parallel_test_groups = \
+ self.__filter_groups(self.parallel_test_groups)
+ self.non_parallel_test_groups = \
+ self.__filter_groups(self.non_parallel_test_groups)
+
+ # create a pool of worker threads
+ pool = multiprocessing.Pool(processes=1)
+
+ results = []
+
+ # whatever happens, try to save as much logs as possible
+ try:
+
+ # create table header
+ print ""
+ print "Test name".ljust(30),
+ print "Test result".ljust(29),
+ print "Test".center(9),
+ print "Total".center(9)
+ print "=" * 80
+
+ # make a note of tests start time
+ self.start = time.time()
+
+ # assign worker threads to run test groups
+ for test_group in self.parallel_test_groups:
+ result = pool.apply_async(run_test_group,
+ [self.__get_cmdline(test_group),
+ test_group])
+ results.append(result)
+
+ # iterate while we have group execution results to get
+ while len(results) > 0:
+
+ # iterate over a copy to be able to safely delete results
+ # this iterates over a list of group results
+ for group_result in results[:]:
+
+ # if the thread hasn't finished yet, continue
+ if not group_result.ready():
+ continue
+
+ res = group_result.get()
+
+ self.__process_results(res)
+
+ # remove result from results list once we're done with it
+ results.remove(group_result)
+
+ # run non_parallel tests. they are run one by one, synchronously
+ for test_group in self.non_parallel_test_groups:
+ group_result = run_test_group(
+ self.__get_cmdline(test_group), test_group)
+
+ self.__process_results(group_result)
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print out summary
+ print "=" * 80
+ print "Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60)
+ if self.fails != 0:
+ print "Number of failed tests: %s" % str(self.fails)
+
+ # write summary to logfile
+ self.logfile.write("Summary\n")
+ self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
+ self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
+ self.logfile.write("Failed tests: ".ljust(
+ 15) + "%i\n" % self.fails)
+ except:
+ print "Exception occurred"
+ print sys.exc_info()
+ self.fails = 1
+
+ # drop logs from all executions to a logfile
+ for buf in self.log_buffers:
+ self.logfile.write(buf.replace("\r", ""))
+
+ return self.fails
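
Two remarks on the runner as refactored above. First, every test entry in
autotest_data.py sets "Report" to None, so the test["Report"](self.target,
log) branch in run_test_group() is effectively dead code -- and would raise
a NameError if it ever ran, since neither self nor log exists in that
function's scope. Second, the class is driven by just four calls; a minimal
usage sketch (binary path and target name are placeholders):

    # minimal driver sketch mirroring app/test/autotest.py; the cmdline
    # and target values are illustrative placeholders
    import sys
    import autotest_data
    import autotest_runner

    cmdline = "./app/test -c f -n 4"
    target = "x86_64-native-linuxapp-gcc"
    runner = autotest_runner.AutotestRunner(cmdline, target, [], [])

    for group in autotest_data.parallel_test_group_list:
        runner.add_parallel_test_group(group)
    for group in autotest_data.non_parallel_test_group_list:
        runner.add_non_parallel_test_group(group)

    # run_all_tests() returns the failure count (or 1 on an internal
    # error), which doubles as a convenient process exit code
    sys.exit(runner.run_all_tests())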
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index 14cffd0..c482ea8 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -33,257 +33,272 @@
# Test functions
-import sys, pexpect, time, os, re
+import pexpect
# default autotest, used to run most tests
# waits for "Test OK"
+
+
def default_autotest(child, test_name):
- child.sendline(test_name)
- result = child.expect(["Test OK", "Test Failed",
- "Command not found", pexpect.TIMEOUT], timeout = 900)
- if result == 1:
- return -1, "Fail"
- elif result == 2:
- return -1, "Fail [Not found]"
- elif result == 3:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ result = child.expect(["Test OK", "Test Failed",
+ "Command not found", pexpect.TIMEOUT], timeout=900)
+ if result == 1:
+ return -1, "Fail"
+ elif result == 2:
+ return -1, "Fail [Not found]"
+ elif result == 3:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
# autotest used to run dump commands
# just fires the command
+
+
def dump_autotest(child, test_name):
- child.sendline(test_name)
- return 0, "Success"
+ child.sendline(test_name)
+ return 0, "Success"
# memory autotest
# reads output and waits for Test OK
+
+
def memory_autotest(child, test_name):
- child.sendline(test_name)
- regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, socket_id:[0-9]*"
- index = child.expect([regexp, pexpect.TIMEOUT], timeout = 180)
- if index != 0:
- return -1, "Fail [Timeout]"
- size = int(child.match.groups()[0], 16)
- if size <= 0:
- return -1, "Fail [Bad size]"
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, " \
+ "socket_id:[0-9]*"
+ index = child.expect([regexp, pexpect.TIMEOUT], timeout=180)
+ if index != 0:
+ return -1, "Fail [Timeout]"
+ size = int(child.match.groups()[0], 16)
+ if size <= 0:
+ return -1, "Fail [Bad size]"
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
+
def spinlock_autotest(child, test_name):
- i = 0
- ir = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 5)
- # ok
- if index == 0:
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
- elif index == 3:
- if int(child.match.groups()[0]) < ir:
- return -1, "Fail [Bad order]"
- ir = int(child.match.groups()[0])
-
- # fail
- elif index == 4:
- return -1, "Fail [Timeout]"
- elif index == 1:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ ir = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Hello from within recursive locks "
+ "from ([0-9]*) !",
+ pexpect.TIMEOUT], timeout=5)
+ # ok
+ if index == 0:
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+ elif index == 3:
+ if int(child.match.groups()[0]) < ir:
+ return -1, "Fail [Bad order]"
+ ir = int(child.match.groups()[0])
+
+ # fail
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+ elif index == 1:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def rwlock_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Global write lock taken on master core ([0-9]*)",
- pexpect.TIMEOUT], timeout = 10)
- # ok
- if index == 0:
- if i != 0xffff:
- return -1, "Fail [Message is missing]"
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
-
- # must be the last message, check ordering
- elif index == 3:
- i = 0xffff
-
- elif index == 4:
- return -1, "Fail [Timeout]"
-
- # fail
- else:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Global write lock taken on master "
+ "core ([0-9]*)",
+ pexpect.TIMEOUT], timeout=10)
+ # ok
+ if index == 0:
+ if i != 0xffff:
+ return -1, "Fail [Message is missing]"
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+
+ # must be the last message, check ordering
+ elif index == 3:
+ i = 0xffff
+
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+
+ # fail
+ else:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def logs_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- log_list = [
- "TESTAPP1: error message",
- "TESTAPP1: critical message",
- "TESTAPP2: critical message",
- "TESTAPP1: error message",
- ]
-
- for log_msg in log_list:
- index = child.expect([log_msg,
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 3:
- return -1, "Fail [Timeout]"
- # not ok
- elif index != 0:
- return -1, "Fail"
-
- index = child.expect(["Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ log_list = [
+ "TESTAPP1: error message",
+ "TESTAPP1: critical message",
+ "TESTAPP2: critical message",
+ "TESTAPP1: error message",
+ ]
+
+ for log_msg in log_list:
+ index = child.expect([log_msg,
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 3:
+ return -1, "Fail [Timeout]"
+ # not ok
+ elif index != 0:
+ return -1, "Fail"
+
+ index = child.expect(["Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ return 0, "Success"
+
def timer_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- index = child.expect(["Start timer stress tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer stress tests 2",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer basic tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- prev_lcore_timer1 = -1
-
- lcore_tim0 = -1
- lcore_tim1 = -1
- lcore_tim2 = -1
- lcore_tim3 = -1
-
- while True:
- index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) count=([0-9]*) on core ([0-9]*)",
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 1:
- break
-
- if index == 2:
- return -1, "Fail"
- elif index == 3:
- return -1, "Fail [Timeout]"
-
- try:
- t = int(child.match.groups()[0])
- id = int(child.match.groups()[1])
- cnt = int(child.match.groups()[2])
- lcore = int(child.match.groups()[3])
- except:
- return -1, "Fail [Cannot parse]"
-
- # timer0 always expires on the same core when cnt < 20
- if id == 0:
- if lcore_tim0 == -1:
- lcore_tim0 = lcore
- elif lcore != lcore_tim0 and cnt < 20:
- return -1, "Fail [lcore != lcore_tim0 (%d, %d)]"%(lcore, lcore_tim0)
- if cnt > 21:
- return -1, "Fail [tim0 cnt > 21]"
-
- # timer1 each time expires on a different core
- if id == 1:
- if lcore == lcore_tim1:
- return -1, "Fail [lcore == lcore_tim1 (%d, %d)]"%(lcore, lcore_tim1)
- lcore_tim1 = lcore
- if cnt > 10:
- return -1, "Fail [tim1 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 2:
- if lcore_tim2 == -1:
- lcore_tim2 = lcore
- elif lcore != lcore_tim2:
- return -1, "Fail [lcore != lcore_tim2 (%d, %d)]"%(lcore, lcore_tim2)
- if cnt > 30:
- return -1, "Fail [tim2 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 3:
- if lcore_tim3 == -1:
- lcore_tim3 = lcore
- elif lcore != lcore_tim3:
- return -1, "Fail [lcore_tim3 changed (%d -> %d)]"%(lcore, lcore_tim3)
- if cnt > 30:
- return -1, "Fail [tim3 cnt > 30]"
-
- # must be 2 different cores
- if lcore_tim0 == lcore_tim3:
- return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]"%(lcore_tim0, lcore_tim3)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ index = child.expect(["Start timer stress tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer stress tests 2",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer basic tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ lcore_tim0 = -1
+ lcore_tim1 = -1
+ lcore_tim2 = -1
+ lcore_tim3 = -1
+
+ while True:
+ index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) "
+ "count=([0-9]*) on core ([0-9]*)",
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 1:
+ break
+
+ if index == 2:
+ return -1, "Fail"
+ elif index == 3:
+ return -1, "Fail [Timeout]"
+
+ try:
+ id = int(child.match.groups()[1])
+ cnt = int(child.match.groups()[2])
+ lcore = int(child.match.groups()[3])
+ except:
+ return -1, "Fail [Cannot parse]"
+
+ # timer0 always expires on the same core when cnt < 20
+ if id == 0:
+ if lcore_tim0 == -1:
+ lcore_tim0 = lcore
+ elif lcore != lcore_tim0 and cnt < 20:
+ return -1, "Fail [lcore != lcore_tim0 (%d, %d)]" \
+ % (lcore, lcore_tim0)
+ if cnt > 21:
+ return -1, "Fail [tim0 cnt > 21]"
+
+ # timer1 each time expires on a different core
+ if id == 1:
+ if lcore == lcore_tim1:
+ return -1, "Fail [lcore == lcore_tim1 (%d, %d)]" \
+ % (lcore, lcore_tim1)
+ lcore_tim1 = lcore
+ if cnt > 10:
+ return -1, "Fail [tim1 cnt > 30]"
+
+ # timer0 always expires on the same core
+ if id == 2:
+ if lcore_tim2 == -1:
+ lcore_tim2 = lcore
+ elif lcore != lcore_tim2:
+ return -1, "Fail [lcore != lcore_tim2 (%d, %d)]" \
+ % (lcore, lcore_tim2)
+ if cnt > 30:
+ return -1, "Fail [tim2 cnt > 30]"
+
+ # timer0 always expires on the same core
+ if id == 3:
+ if lcore_tim3 == -1:
+ lcore_tim3 = lcore
+ elif lcore != lcore_tim3:
+ return -1, "Fail [lcore_tim3 changed (%d -> %d)]" \
+ % (lcore, lcore_tim3)
+ if cnt > 30:
+ return -1, "Fail [tim3 cnt > 30]"
+
+ # must be 2 different cores
+ if lcore_tim0 == lcore_tim3:
+ return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]" \
+ % (lcore_tim0, lcore_tim3)
+
+ return 0, "Success"
+
def ring_autotest(child, test_name):
- child.sendline(test_name)
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 2)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- child.sendline("set_watermark test 100")
- child.sendline("dump_ring test")
- index = child.expect([" watermark=100",
- pexpect.TIMEOUT], timeout = 1)
- if index != 0:
- return -1, "Fail [Bad watermark]"
-
- return 0, "Success"
+ child.sendline(test_name)
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=2)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ child.sendline("set_watermark test 100")
+ child.sendline("dump_ring test")
+ index = child.expect([" watermark=100",
+ pexpect.TIMEOUT], timeout=1)
+ if index != 0:
+ return -1, "Fail [Bad watermark]"
+
+ return 0, "Success"
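
All of these drivers share one shape: send the command, expect() against a
list of possible outputs, and map the matched index to a (code, message)
tuple. Adding a new driver is mechanical; a sketch for a hypothetical
"foo_autotest" command (not one the test app actually provides):

    # sketch of a new test driver in the established style
    import pexpect

    def foo_autotest(child, test_name):
        child.sendline(test_name)              # type the command at RTE>>
        index = child.expect(["Test OK",
                              "Test Failed",
                              pexpect.TIMEOUT], timeout=60)
        if index == 1:
            return -1, "Fail"
        elif index == 2:
            return -1, "Fail [Timeout]"
        return 0, "Success"                    # index 0 matched "Test OK"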
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 29e8efb..34c62de 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -58,7 +58,8 @@
html_show_copyright = False
highlight_language = 'none'
-version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion']).decode('utf-8').rstrip()
+version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion'])
+version = version.decode('utf-8').rstrip()
release = version
master_doc = 'index'
@@ -94,6 +95,7 @@
'preamble': latex_preamble
}
+
# Override the default Latex formatter in order to modify the
# code/verbatim blocks.
class CustomLatexFormatter(LatexFormatter):
@@ -117,12 +119,12 @@ def __init__(self, **options):
("tools/devbind", "dpdk-devbind",
"check device status and bind/unbind them from drivers", "", 8)]
-######## :numref: fallback ########
+
+# ####### :numref: fallback ########
# The following hook functions add some simple handling for the :numref:
# directive for Sphinx versions prior to 1.3.1. The functions replace the
# :numref: reference with a link to the target (for all Sphinx doc types).
# It doesn't try to label figures/tables.
-
def numref_role(reftype, rawtext, text, lineno, inliner):
"""
Add a Sphinx role to handle numref references. Note, we can't convert
@@ -136,6 +138,7 @@ def numref_role(reftype, rawtext, text, lineno, inliner):
internal=True)
return [newnode], []
+
def process_numref(app, doctree, from_docname):
"""
Process the numref nodes once the doctree has been built and prior to
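
The two hook functions only take effect once registered with Sphinx; conf.py
wires them up in its setup() function, roughly as below (the exact version
gate is an assumption about the surrounding code, but add_role() and
connect() are standard Sphinx application APIs):

    # sketch of the :numref: fallback registration
    from distutils.version import LooseVersion
    from sphinx import __version__ as sphinx_version

    def setup(app):
        # only install the fallback on Sphinx versions that lack :numref:
        if LooseVersion(sphinx_version) < LooseVersion('1.3.1'):
            app.add_role('numref', numref_role)
            app.connect('doctree-resolved', process_numref)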
diff --git a/examples/ip_pipeline/config/diagram-generator.py b/examples/ip_pipeline/config/diagram-generator.py
index 6b7170b..1748833 100755
--- a/examples/ip_pipeline/config/diagram-generator.py
+++ b/examples/ip_pipeline/config/diagram-generator.py
@@ -36,7 +36,8 @@
# the DPDK ip_pipeline application.
#
# The input configuration file is translated to an output file in DOT syntax,
-# which is then used to create the image file using graphviz (www.graphviz.org).
+# which is then used to create the image file using graphviz
+# (www.graphviz.org).
#
from __future__ import print_function
@@ -94,6 +95,7 @@
# SOURCEx | SOURCEx | SOURCEx | PIPELINEy | SOURCEx
# SINKx | SINKx | PIPELINEy | SINKx | SINKx
+
#
# Parse the input configuration file to detect the graph nodes and edges
#
@@ -321,16 +323,17 @@ def process_config_file(cfgfile):
#
print('Creating image file "%s" ...' % imgfile)
if os.system('which dot > /dev/null'):
- print('Error: Unable to locate "dot" executable.' \
- 'Please install the "graphviz" package (www.graphviz.org).')
+ print('Error: Unable to locate "dot" executable.'
+ 'Please install the "graphviz" package (www.graphviz.org).')
return
os.system(dot_cmd)
if __name__ == '__main__':
- parser = argparse.ArgumentParser(description=\
- 'Create diagram for IP pipeline configuration file.')
+ parser = argparse.ArgumentParser(description='Create diagram for IP '
+ 'pipeline configuration '
+ 'file.')
parser.add_argument(
'-f',
diff --git a/examples/ip_pipeline/config/pipeline-to-core-mapping.py b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
index c2050b8..7a4eaa2 100755
--- a/examples/ip_pipeline/config/pipeline-to-core-mapping.py
+++ b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
@@ -39,15 +39,14 @@
#
from __future__ import print_function
-import sys
-import errno
-import os
-import re
+from collections import namedtuple
+import argparse
import array
+import errno
import itertools
+import os
import re
-import argparse
-from collections import namedtuple
+import sys
# default values
enable_stage0_traceout = 1
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index d38d0b5..ccc22ec 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -38,40 +38,40 @@
cores = []
core_map = {}
-fd=open("/proc/cpuinfo")
+fd = open("/proc/cpuinfo")
lines = fd.readlines()
fd.close()
core_details = []
core_lines = {}
for line in lines:
- if len(line.strip()) != 0:
- name, value = line.split(":", 1)
- core_lines[name.strip()] = value.strip()
- else:
- core_details.append(core_lines)
- core_lines = {}
+ if len(line.strip()) != 0:
+ name, value = line.split(":", 1)
+ core_lines[name.strip()] = value.strip()
+ else:
+ core_details.append(core_lines)
+ core_lines = {}
for core in core_details:
- for field in ["processor", "core id", "physical id"]:
- if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
- sys.exit(1)
- core[field] = int(core[field])
+ for field in ["processor", "core id", "physical id"]:
+ if field not in core:
+ print "Error getting '%s' value from /proc/cpuinfo" % field
+ sys.exit(1)
+ core[field] = int(core[field])
- if core["core id"] not in cores:
- cores.append(core["core id"])
- if core["physical id"] not in sockets:
- sockets.append(core["physical id"])
- key = (core["physical id"], core["core id"])
- if key not in core_map:
- core_map[key] = []
- core_map[key].append(core["processor"])
+ if core["core id"] not in cores:
+ cores.append(core["core id"])
+ if core["physical id"] not in sockets:
+ sockets.append(core["physical id"])
+ key = (core["physical id"], core["core id"])
+ if key not in core_map:
+ core_map[key] = []
+ core_map[key].append(core["processor"])
print "============================================================"
print "Core and Socket Information (as reported by '/proc/cpuinfo')"
print "============================================================\n"
-print "cores = ",cores
+print "cores = ", cores
print "sockets = ", sockets
print ""
@@ -81,15 +81,16 @@
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
+ print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
print ""
+
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "--------".ljust(max_core_map_len),
+ print "--------".ljust(max_core_map_len),
print ""
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
- for s in sockets:
- print str(core_map[(s,c)]).ljust(max_core_map_len),
- print ""
+ print "Core %s" % str(c).ljust(max_core_id_len),
+ for s in sockets:
+ print str(core_map[(s, c)]).ljust(max_core_map_len),
+ print ""
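
The script's central structure is core_map, keyed by (physical id, core id)
with the list of sibling lcores as the value; a tiny illustration of
consuming it, with invented processor numbers for a one-socket, two-core,
hyperthreaded machine:

    # illustrative only: core_map as cpu_layout.py builds it
    core_map = {(0, 0): [0, 2], (0, 1): [1, 3]}
    for (socket, core), lcores in sorted(core_map.items()):
        print "socket %d core %d -> lcores %s" % (socket, core, lcores)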
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index f1d374d..4f51a4b 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -93,10 +93,10 @@ def usage():
Unbind a device (Equivalent to \"-b none\")
--force:
- By default, network devices which are used by Linux - as indicated by having
- routes in the routing table - cannot be modified. Using the --force
- flag overrides this behavior, allowing active links to be forcibly
- unbound.
+ By default, network devices which are used by Linux - as indicated by
+ having routes in the routing table - cannot be modified. Using the
+ --force flag overrides this behavior, allowing active links to be
+ forcibly unbound.
WARNING: This can lead to loss of network connection and should be used
with caution.
@@ -151,7 +151,7 @@ def find_module(mod):
# check for a copy based off current path
tools_dir = dirname(abspath(sys.argv[0]))
- if (tools_dir.endswith("tools")):
+ if tools_dir.endswith("tools"):
base_dir = dirname(tools_dir)
find_out = check_output(["find", base_dir, "-name", mod + ".ko"])
if len(find_out) > 0: # something matched
@@ -249,7 +249,7 @@ def get_nic_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
+ if len(dev_line) == 0:
if dev["Class"][0:2] == NETWORK_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
@@ -315,8 +315,8 @@ def get_crypto_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
- if (dev["Class"][0:2] == CRYPTO_BASE_CLASS):
+ if len(dev_line) == 0:
+ if dev["Class"][0:2] == CRYPTO_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
dev["Device"] = int(dev["Device"], 16)
@@ -513,7 +513,8 @@ def display_devices(title, dev_list, extra_params=None):
for dev in dev_list:
if extra_params is not None:
strings.append("%s '%s' %s" % (dev["Slot"],
- dev["Device_str"], extra_params % dev))
+ dev["Device_str"],
+ extra_params % dev))
else:
strings.append("%s '%s'" % (dev["Slot"], dev["Device_str"]))
# sort before printing, so that the entries appear in PCI order
@@ -532,7 +533,7 @@ def show_status():
# split our list of network devices into the three categories above
for d in devices.keys():
- if (NETWORK_BASE_CLASS in devices[d]["Class"]):
+ if NETWORK_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
@@ -555,7 +556,7 @@ def show_status():
no_drv = []
for d in devices.keys():
- if (CRYPTO_BASE_CLASS in devices[d]["Class"]):
+ if CRYPTO_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
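
For context, both device scans above follow the same lspci parsing pattern:
"lspci -Dvmmn" prints one "Key:\tValue" field per line, with a blank line
terminating each device record. A condensed sketch (field handling
simplified relative to the real script):

    # sketch of the record-parsing loop in dpdk-devbind.py
    from subprocess import check_output

    dev = {}
    devices = {}
    for dev_line in check_output(["lspci", "-Dvmmn"]).splitlines():
        if len(dev_line) == 0:
            devices[dev["Slot"]] = dev      # blank line ends one record
            dev = {}
        else:
            name, value = dev_line.split("\t", 1)
            dev[name.rstrip(":")] = value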
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3db9819..3d3ad7d 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -4,52 +4,20 @@
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+import json
import os
+import platform
+import string
import sys
+from elftools.common.exceptions import ELFError
+from elftools.common.py3compat import (byte2int, bytes2str, str2bytes)
+from elftools.elf.elffile import ELFFile
from optparse import OptionParser
-import string
-import json
-import platform
# For running from development directory. It should take precedence over the
# installed pyelftools.
sys.path.insert(0, '.')
-
-from elftools import __version__
-from elftools.common.exceptions import ELFError
-from elftools.common.py3compat import (
- ifilter, byte2int, bytes2str, itervalues, str2bytes)
-from elftools.elf.elffile import ELFFile
-from elftools.elf.dynamic import DynamicSection, DynamicSegment
-from elftools.elf.enums import ENUM_D_TAG
-from elftools.elf.segments import InterpSegment
-from elftools.elf.sections import SymbolTableSection
-from elftools.elf.gnuversions import (
- GNUVerSymSection, GNUVerDefSection,
- GNUVerNeedSection,
-)
-from elftools.elf.relocation import RelocationSection
-from elftools.elf.descriptions import (
- describe_ei_class, describe_ei_data, describe_ei_version,
- describe_ei_osabi, describe_e_type, describe_e_machine,
- describe_e_version_numeric, describe_p_type, describe_p_flags,
- describe_sh_type, describe_sh_flags,
- describe_symbol_type, describe_symbol_bind, describe_symbol_visibility,
- describe_symbol_shndx, describe_reloc_type, describe_dyn_tag,
- describe_ver_flags,
-)
-from elftools.elf.constants import E_FLAGS
-from elftools.dwarf.dwarfinfo import DWARFInfo
-from elftools.dwarf.descriptions import (
- describe_reg_name, describe_attr_value, set_global_machine_arch,
- describe_CFI_instructions, describe_CFI_register_rule,
- describe_CFI_CFA_rule,
-)
-from elftools.dwarf.constants import (
- DW_LNS_copy, DW_LNS_set_file, DW_LNE_define_file)
-from elftools.dwarf.callframe import CIE, FDE
-
raw_output = False
pcidb = None
@@ -326,7 +294,7 @@ def parse_pmd_info_string(self, mystring):
for i in optional_pmd_info:
try:
print("%s: %s" % (i['tag'], pmdinfo[i['id']]))
- except KeyError as e:
+ except KeyError:
continue
if (len(pmdinfo["pci_ids"]) != 0):
@@ -475,7 +443,7 @@ def process_dt_needed_entries(self):
with open(library, 'rb') as file:
try:
libelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
print("%s is no an ELF file" % library)
continue
libelf.process_dt_needed_entries()
@@ -491,7 +459,7 @@ def scan_autoload_path(autoload_path):
try:
dirs = os.listdir(autoload_path)
- except OSError as e:
+ except OSError:
# Couldn't read the directory, give up
return
@@ -503,10 +471,10 @@ def scan_autoload_path(autoload_path):
try:
file = open(dpath, 'rb')
readelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
# this is likely not an elf file, skip it
continue
- except IOError as e:
+ except IOError:
# No permission to read the file, skip it
continue
@@ -531,7 +499,7 @@ def scan_for_autoload_pmds(dpdk_path):
file = open(dpdk_path, 'rb')
try:
readelf = ReadElf(file, sys.stdout)
- except ElfError as e:
+ except ELFError:
if raw_output is False:
print("Unable to parse %s" % file)
return
@@ -557,7 +525,7 @@ def main(stream=None):
global raw_output
global pcidb
- pcifile_default = "./pci.ids" # for unknown OS's assume local file
+ pcifile_default = "./pci.ids" # For unknown OS's assume local file
if platform.system() == 'Linux':
pcifile_default = "/usr/share/hwdata/pci.ids"
elif platform.system() == 'FreeBSD':
@@ -577,7 +545,8 @@ def main(stream=None):
"to get vendor names from",
default=pcifile_default, metavar="FILE")
optparser.add_option("-t", "--table", dest="tblout",
- help="output information on hw support as a hex table",
+ help="output information on hw support as a "
+ "hex table",
action='store_true')
optparser.add_option("-p", "--plugindir", dest="pdir",
help="scan dpdk for autoload plugins",
--
2.7.4
* [dpdk-dev] [PATCH v3 2/3] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (14 preceding siblings ...)
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 1/3] app: make python apps pep8 compliant John McNamara
@ 2016-12-18 14:32 ` John McNamara
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 3/3] doc: add required python versions to docs John McNamara
` (4 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:32 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
Make all the DPDK Python apps work with either Python 2 or Python 3,
so that they run under whichever interpreter is the system default.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 26 ++++++++++++------------
app/cmdline_test/cmdline_test_data.py | 2 --
app/test/autotest.py | 10 ++++-----
app/test/autotest_data.py | 2 --
app/test/autotest_runner.py | 37 ++++++++++++++++------------------
app/test/autotest_test_funcs.py | 2 --
tools/cpu_layout.py | 38 ++++++++++++++++++-----------------
tools/dpdk-devbind.py | 2 +-
tools/dpdk-pmdinfo.py | 14 +++++++------
9 files changed, 64 insertions(+), 69 deletions(-)
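
The bulk of this patch is one recurring, mechanical change: import
print_function from __future__ and convert every print statement into a
call. The idiom, in miniature:

    # the compatibility pattern applied throughout this patch: on
    # Python 2 the __future__ import makes print a function, so the
    # same call syntax runs unchanged on 2.7 and 3.x
    from __future__ import print_function

    print("Running command-line tests...")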
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 4729987..229f71f 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,7 +32,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that runs cmdline_test app and feeds keystrokes into it.
-
+from __future__ import print_function
import cmdline_test_data
import os
import pexpect
@@ -81,38 +81,38 @@ def runHistoryTest(child):
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
child = pexpect.spawn(test_app_path)
-print "Running command-line tests..."
+print("Running command-line tests...")
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
+ testname = (test["Name"] + ":").ljust(30)
try:
runTest(child, test)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
-print ("History fill test:").ljust(30),
+testname = ("History fill test:").ljust(30)
try:
runHistoryTest(child)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index 3ce6cbc..28dfefe 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/app/test/autotest.py b/app/test/autotest.py
index 3a00538..5c19a02 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,15 +32,15 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that uses either test app or qemu controlled by python-pexpect
-
+from __future__ import print_function
import autotest_data
import autotest_runner
import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print("Usage: autotest.py [test app|test iso image] ",
+ "[target] [whitelist|-blacklist]")
if len(sys.argv) < 3:
usage()
@@ -63,7 +63,7 @@ def usage():
cmdline = "%s -c f -n 4" % (sys.argv[1])
-print cmdline
+print(cmdline)
runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
test_whitelist)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 0cf4cfd..0cd598b 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 55b63a8..fc882ec 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
@@ -271,15 +269,16 @@ def __process_results(self, results):
total_time = int(cur_time - self.start)
# print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+ result = ("%s:" % test_name).ljust(30)
+ result += result_str.ljust(29)
+ result += "[%02dm %02ds]" % (test_time / 60, test_time % 60)
# don't print out total time every line, it's the same anyway
if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ print(result,
+ "[%02dm %02ds]" % (total_time / 60, total_time % 60))
else:
- print ""
+ print(result)
# if test failed and it wasn't a "start" test
if test_result < 0 and not i == 0:
@@ -294,7 +293,7 @@ def __process_results(self, results):
f = open("%s_%s_report.rst" %
(self.target, test_name), "w")
except IOError:
- print "Report for %s could not be created!" % test_name
+ print("Report for %s could not be created!" % test_name)
else:
with f:
f.write(report)
@@ -360,12 +359,10 @@ def run_all_tests(self):
try:
# create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
+ print("")
+ print("Test name".ljust(30), "Test result".ljust(29),
+ "Test".center(9), "Total".center(9))
+ print("=" * 80)
# make a note of tests start time
self.start = time.time()
@@ -407,11 +404,11 @@ def run_all_tests(self):
total_time = int(cur_time - self.start)
# print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60,
- total_time % 60)
+ print("=" * 80)
+ print("Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60))
if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
+ print("Number of failed tests: %s" % str(self.fails))
# write summary to logfile
self.logfile.write("Summary\n")
@@ -420,8 +417,8 @@ def run_all_tests(self):
self.logfile.write("Failed tests: ".ljust(
15) + "%i\n" % self.fails)
except:
- print "Exception occurred"
- print sys.exc_info()
+ print("Exception occurred")
+ print(sys.exc_info())
self.fails = 1
# drop logs from all executions to a logfile
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index c482ea8..1c5f390 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index ccc22ec..0e049a6 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
@@ -31,7 +32,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
-
+from __future__ import print_function
import sys
sockets = []
@@ -55,7 +56,7 @@
for core in core_details:
for field in ["processor", "core id", "physical id"]:
if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
+ print("Error getting '%s' value from /proc/cpuinfo" % field)
sys.exit(1)
core[field] = int(core[field])
@@ -68,29 +69,30 @@
core_map[key] = []
core_map[key].append(core["processor"])
-print "============================================================"
-print "Core and Socket Information (as reported by '/proc/cpuinfo')"
-print "============================================================\n"
-print "cores = ", cores
-print "sockets = ", sockets
-print ""
+print("============================================================")
+print("Core and Socket Information (as reported by '/proc/cpuinfo')")
+print("============================================================\n")
+print("cores = ", cores)
+print("sockets = ", sockets)
+print("")
max_processor_len = len(str(len(cores) * len(sockets) * 2 - 1))
max_core_map_len = max_processor_len * 2 + len('[, ]') + len('Socket ')
max_core_id_len = len(str(max(cores)))
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
-print ""
+ output += " Socket %s" % str(s).ljust(max_core_map_len - len('Socket '))
+print(output)
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "--------".ljust(max_core_map_len),
-print ""
+ output += " --------".ljust(max_core_map_len)
+ output += " "
+print(output)
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
+ output = "Core %s" % str(c).ljust(max_core_id_len)
for s in sockets:
- print str(core_map[(s, c)]).ljust(max_core_map_len),
- print ""
+ output += " " + str(core_map[(s, c)]).ljust(max_core_map_len)
+ print(output)
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index 4f51a4b..e057b87 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#! /usr/bin/env python
#
# BSD LICENSE
#
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3d3ad7d..d4e51aa 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -1,9 +1,11 @@
#!/usr/bin/env python
+
# -------------------------------------------------------------------------
#
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+from __future__ import print_function
import json
import os
import platform
@@ -54,7 +56,7 @@ def addDevice(self, deviceStr):
self.devices[devID] = Device(deviceStr)
def report(self):
- print self.ID, self.name
+ print(self.ID, self.name)
for id, dev in self.devices.items():
dev.report()
@@ -80,7 +82,7 @@ def __init__(self, deviceStr):
self.subdevices = {}
def report(self):
- print "\t%s\t%s" % (self.ID, self.name)
+ print("\t%s\t%s" % (self.ID, self.name))
for subID, subdev in self.subdevices.items():
subdev.report()
@@ -126,7 +128,7 @@ def __init__(self, vendor, device, name):
self.name = name
def report(self):
- print "\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name)
+ print("\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name))
class PCIIds:
@@ -154,7 +156,7 @@ def reportVendors(self):
"""Reports the vendors
"""
for vid, v in self.vendors.items():
- print v.ID, v.name
+ print(v.ID, v.name)
def report(self, vendor=None):
"""
@@ -185,7 +187,7 @@ def findDate(self, content):
def parse(self):
if len(self.contents) < 1:
- print "data/%s-pci.ids not found" % self.date
+ print("data/%s-pci.ids not found" % self.date)
else:
vendorID = ""
deviceID = ""
@@ -432,7 +434,7 @@ def process_dt_needed_entries(self):
for tag in dynsec.iter_tags():
if tag.entry.d_tag == 'DT_NEEDED':
- rc = tag.needed.find("librte_pmd")
+ rc = tag.needed.find(b"librte_pmd")
if (rc != -1):
library = search_file(tag.needed,
runpath + ":" + ldlibpath +
--
2.7.4
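One subtle Python 3 fix above is the bytes pattern passed to
tag.needed.find(): under Python 3 the dynamic-tag name comes back as
bytes, and bytes.find() rejects a str argument. A small illustration,
using a hypothetical library name:

    needed = b"librte_pmd_ring.so.1"   # hypothetical DT_NEEDED entry
    print(needed.find(b"librte_pmd"))  # 0 on Python 2 and 3: prefix match
    # needed.find("librte_pmd") would raise TypeError on Python 3,
    # which is why the patch switches to the b"librte_pmd" literal.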
* [dpdk-dev] [PATCH v3 3/3] doc: add required python versions to docs
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (15 preceding siblings ...)
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 2/3] app: make python apps python2/3 compliant John McNamara
@ 2016-12-18 14:32 ` John McNamara
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 0/3] app: make python apps python2/3 compliant John McNamara
` (3 subsequent siblings)
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-18 14:32 UTC (permalink / raw)
To: dev; +Cc: mkletzan, thomas.monjalon, nhorman, John McNamara
Add a requirement to support both Python 2 and 3 to the
DPDK Python Coding Standards and the Getting Started Guide.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
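Scripts that want to fail fast on an unsupported interpreter could add a
guard matching the documented requirement; a hypothetical sketch:

    import sys

    if sys.version_info[:2] < (2, 7) or (3, 0) <= sys.version_info[:2] < (3, 2):
        sys.exit("This script requires Python 2.7+ or 3.2+")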
---
doc/guides/contributing/coding_style.rst | 3 ++-
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 1eb67f3..4163960 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -690,6 +690,7 @@ Control Statements
Python Code
-----------
-All python code should be compliant with `PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
+All Python code should work with Python 2.7+ and 3.2+ and be compliant with
+`PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
The ``pep8`` tool can be used for testing compliance with the guidelines.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 3d74342..9653a13 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -86,7 +86,7 @@ Compilation of the DPDK
.. note::
- Python, version 2.6 or 2.7, to use various helper scripts included in the DPDK package.
+ Python, version 2.7+ or 3.2+, to use various helper scripts included in the DPDK package.
**Optional Tools:**
--
2.7.4
* [dpdk-dev] [PATCH v4 0/3] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (16 preceding siblings ...)
2016-12-18 14:32 ` [dpdk-dev] [PATCH v3 3/3] doc: add required python versions to docs John McNamara
@ 2016-12-21 15:03 ` John McNamara
2017-01-04 20:15 ` Thomas Monjalon
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 1/3] app: make python apps pep8 compliant John McNamara
` (2 subsequent siblings)
20 siblings, 1 reply; 28+ messages in thread
From: John McNamara @ 2016-12-21 15:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, nhorman, John McNamara
These patches refactor the DPDK Python applications to make them Python 2/3
compatible.
In order to do this, the patchset starts by making the apps PEP8 compliant, in
accordance with the DPDK Coding Guidelines:
http://dpdk.org/doc/guides/contributing/coding_style.html#python-code
Implementing PEP8 and Python 2/3 compliance means that we can check all future
Python patches for consistency. Python 2/3 support also makes downstream
packaging easier as more distros move to Python 3 as the system Python.
See the previous discussion about Python 2/3 compatibility here:
http://dpdk.org/ml/archives/dev/2016-December/051683.html
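The pep8 tool referenced in the coding-style guide can be driven from
Python as well as from the shell; a sketch, assuming the pep8 package is
installed and using tools/cpu_layout.py purely as an example path:

    import pep8

    style = pep8.StyleGuide()                            # default PEP8 settings
    report = style.check_files(["tools/cpu_layout.py"])  # example target
    print("violations found:", report.total_errors)      # 0 once patched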
V4: * Rebase to latest HEAD.
V3: * Squash shebang patch into Python 3 patch.
* Only add /usr/bin/env shebang line to code that is executable.
V2: * Fix broken rebase.
John McNamara (3):
app: make python apps pep8 compliant
app: make python apps python2/3 compliant
doc: add required python versions to docs
app/cmdline_test/cmdline_test.py | 87 ++-
app/cmdline_test/cmdline_test_data.py | 403 +++++-----
app/test/autotest.py | 46 +-
app/test/autotest_data.py | 831 ++++++++++-----------
app/test/autotest_runner.py | 740 +++++++++---------
app/test/autotest_test_funcs.py | 481 ++++++------
doc/guides/conf.py | 9 +-
doc/guides/contributing/coding_style.rst | 3 +-
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 79 +-
tools/dpdk-devbind.py | 25 +-
tools/dpdk-pmdinfo.py | 75 +-
14 files changed, 1405 insertions(+), 1400 deletions(-)
--
2.7.4
* [dpdk-dev] [PATCH v4 1/3] app: make python apps pep8 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (17 preceding siblings ...)
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 0/3] app: make python apps python2/3 compliant John McNamara
@ 2016-12-21 15:03 ` John McNamara
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 2/3] app: make python apps python2/3 compliant John McNamara
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 3/3] doc: add required python versions to docs John McNamara
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-21 15:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, nhorman, John McNamara
Make all DPDK Python applications compliant with the PEP8 standard
to allow for consistency checking of patches and to allow further
refactoring.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
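The recurring fixes are mechanical: four-space indentation, whitespace
after commas and colons, and identity comparison with None. Condensed
into one small function (mirroring the runTest() hunk below):

    def runTest(child, test):            # space after comma (pep8 E231)
        child.send(test["Sequence"])     # indent by 4 spaces (E111)
        if test["Result"] is None:       # "is None", not "== None" (E711)
            return 0
        child.expect(test["Result"], 1)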
---
app/cmdline_test/cmdline_test.py | 81 +-
app/cmdline_test/cmdline_test_data.py | 401 +++++-----
app/test/autotest.py | 40 +-
app/test/autotest_data.py | 831 +++++++++++----------
app/test/autotest_runner.py | 739 +++++++++---------
app/test/autotest_test_funcs.py | 479 ++++++------
doc/guides/conf.py | 9 +-
examples/ip_pipeline/config/diagram-generator.py | 13 +-
.../ip_pipeline/config/pipeline-to-core-mapping.py | 11 +-
tools/cpu_layout.py | 55 +-
tools/dpdk-devbind.py | 23 +-
tools/dpdk-pmdinfo.py | 61 +-
12 files changed, 1376 insertions(+), 1367 deletions(-)
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 8efc5ea..4729987 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -33,16 +33,21 @@
# Script that runs cmdline_test app and feeds keystrokes into it.
-import sys, pexpect, string, os, cmdline_test_data
+import cmdline_test_data
+import os
+import pexpect
+import sys
+
#
# function to run test
#
-def runTest(child,test):
- child.send(test["Sequence"])
- if test["Result"] == None:
- return 0
- child.expect(test["Result"],1)
+def runTest(child, test):
+ child.send(test["Sequence"])
+ if test["Result"] is None:
+ return 0
+ child.expect(test["Result"], 1)
+
#
# history test is a special case
@@ -57,57 +62,57 @@ def runTest(child,test):
# This is a self-contained test, it needs only a pexpect child
#
def runHistoryTest(child):
- # find out history size
- child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
- child.expect("History buffer size: \\d+", timeout=1)
- history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
- i = 0
+ # find out history size
+ child.sendline(cmdline_test_data.CMD_GET_BUFSIZE)
+ child.expect("History buffer size: \\d+", timeout=1)
+ history_size = int(child.after[len(cmdline_test_data.BUFSIZE_TEMPLATE):])
+ i = 0
- # fill the history with numbers
- while i < history_size / 10:
- # add 1 to prevent from parsing as octals
- child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
- # the app will simply print out the number
- child.expect(str(i + 100000000), timeout=1)
- i += 1
- # scroll back history
- child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
- child.expect("100000000", timeout=1)
+ # fill the history with numbers
+ while i < history_size / 10:
+ # add 1 to prevent from parsing as octals
+ child.send("1" + str(i).zfill(8) + cmdline_test_data.ENTER)
+ # the app will simply print out the number
+ child.expect(str(i + 100000000), timeout=1)
+ i += 1
+ # scroll back history
+ child.send(cmdline_test_data.UP * (i + 2) + cmdline_test_data.ENTER)
+ child.expect("100000000", timeout=1)
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
- sys.exit(1)
+ print "Error: please supply cmdline_test app path"
+ sys.exit(1)
child = pexpect.spawn(test_app_path)
print "Running command-line tests..."
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
- try:
- runTest(child,test)
- print "PASS"
- except:
- print "FAIL"
- print child
- sys.exit(1)
+ print (test["Name"] + ":").ljust(30),
+ try:
+ runTest(child, test)
+ print "PASS"
+ except:
+ print "FAIL"
+ print child
+ sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
print ("History fill test:").ljust(30),
try:
- runHistoryTest(child)
- print "PASS"
+ runHistoryTest(child)
+ print "PASS"
except:
- print "FAIL"
- print child
- sys.exit(1)
+ print "FAIL"
+ print child
+ sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index b1945a5..3ce6cbc 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -33,8 +33,6 @@
# collection of static data
-import sys
-
# keycode constants
CTRL_A = chr(1)
CTRL_B = chr(2)
@@ -95,217 +93,220 @@
# and expected output (if any).
tests = [
-# test basic commands
- {"Name" : "command test 1",
- "Sequence" : "ambiguous first" + ENTER,
- "Result" : CMD1},
- {"Name" : "command test 2",
- "Sequence" : "ambiguous second" + ENTER,
- "Result" : CMD2},
- {"Name" : "command test 3",
- "Sequence" : "ambiguous ambiguous" + ENTER,
- "Result" : AMBIG},
- {"Name" : "command test 4",
- "Sequence" : "ambiguous ambiguous2" + ENTER,
- "Result" : AMBIG},
+ # test basic commands
+ {"Name": "command test 1",
+ "Sequence": "ambiguous first" + ENTER,
+ "Result": CMD1},
+ {"Name": "command test 2",
+ "Sequence": "ambiguous second" + ENTER,
+ "Result": CMD2},
+ {"Name": "command test 3",
+ "Sequence": "ambiguous ambiguous" + ENTER,
+ "Result": AMBIG},
+ {"Name": "command test 4",
+ "Sequence": "ambiguous ambiguous2" + ENTER,
+ "Result": AMBIG},
- {"Name" : "invalid command test 1",
- "Sequence" : "ambiguous invalid" + ENTER,
- "Result" : BAD_ARG},
-# test invalid commands
- {"Name" : "invalid command test 2",
- "Sequence" : "invalid" + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "invalid command test 3",
- "Sequence" : "ambiguousinvalid" + ENTER2,
- "Result" : NOT_FOUND},
+ {"Name": "invalid command test 1",
+ "Sequence": "ambiguous invalid" + ENTER,
+ "Result": BAD_ARG},
+ # test invalid commands
+ {"Name": "invalid command test 2",
+ "Sequence": "invalid" + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "invalid command test 3",
+ "Sequence": "ambiguousinvalid" + ENTER2,
+ "Result": NOT_FOUND},
-# test arrows and deletes
- {"Name" : "arrows & delete test 1",
- "Sequence" : "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
- "Result" : SINGLE},
- {"Name" : "arrows & delete test 2",
- "Sequence" : "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
- "Result" : SINGLE},
+ # test arrows and deletes
+ {"Name": "arrows & delete test 1",
+ "Sequence": "singlebad" + LEFT*2 + CTRL_B + DEL*3 + ENTER,
+ "Result": SINGLE},
+ {"Name": "arrows & delete test 2",
+ "Sequence": "singlebad" + LEFT*5 + RIGHT + CTRL_F + DEL*3 + ENTER,
+ "Result": SINGLE},
-# test backspace
- {"Name" : "backspace test",
- "Sequence" : "singlebad" + BKSPACE*3 + ENTER,
- "Result" : SINGLE},
+ # test backspace
+ {"Name": "backspace test",
+ "Sequence": "singlebad" + BKSPACE*3 + ENTER,
+ "Result": SINGLE},
-# test goto left and goto right
- {"Name" : "goto left test",
- "Sequence" : "biguous first" + CTRL_A + "am" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right test",
- "Sequence" : "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
- "Result" : CMD1},
+ # test goto left and goto right
+ {"Name": "goto left test",
+ "Sequence": "biguous first" + CTRL_A + "am" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right test",
+ "Sequence": "biguous fir" + CTRL_A + "am" + CTRL_E + "st" + ENTER,
+ "Result": CMD1},
-# test goto words
- {"Name" : "goto left word test",
- "Sequence" : "ambiguous st" + ALT_B + "fir" + ENTER,
- "Result" : CMD1},
- {"Name" : "goto right word test",
- "Sequence" : "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
- "Result" : CMD1},
+ # test goto words
+ {"Name": "goto left word test",
+ "Sequence": "ambiguous st" + ALT_B + "fir" + ENTER,
+ "Result": CMD1},
+ {"Name": "goto right word test",
+ "Sequence": "ambig first" + CTRL_A + ALT_F + "uous" + ENTER,
+ "Result": CMD1},
-# test removing words
- {"Name" : "remove left word 1",
- "Sequence" : "single invalid" + CTRL_W + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove left word 2",
- "Sequence" : "single invalid" + ALT_BKSPACE + ENTER,
- "Result" : SINGLE},
- {"Name" : "remove right word",
- "Sequence" : "single invalid" + ALT_B + ALT_D + ENTER,
- "Result" : SINGLE},
+ # test removing words
+ {"Name": "remove left word 1",
+ "Sequence": "single invalid" + CTRL_W + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove left word 2",
+ "Sequence": "single invalid" + ALT_BKSPACE + ENTER,
+ "Result": SINGLE},
+ {"Name": "remove right word",
+ "Sequence": "single invalid" + ALT_B + ALT_D + ENTER,
+ "Result": SINGLE},
-# test kill buffer (copy and paste)
- {"Name" : "killbuffer test 1",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A + CTRL_Y + ENTER,
- "Result" : CMD1},
- {"Name" : "killbuffer test 2",
- "Sequence" : "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
- "Result" : NOT_FOUND},
+ # test kill buffer (copy and paste)
+ {"Name": "killbuffer test 1",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + " first" + CTRL_A +
+ CTRL_Y + ENTER,
+ "Result": CMD1},
+ {"Name": "killbuffer test 2",
+ "Sequence": "ambiguous" + CTRL_A + CTRL_K + CTRL_Y*26 + ENTER,
+ "Result": NOT_FOUND},
-# test newline
- {"Name" : "newline test",
- "Sequence" : "invalid" + CTRL_C + "single" + ENTER,
- "Result" : SINGLE},
+ # test newline
+ {"Name": "newline test",
+ "Sequence": "invalid" + CTRL_C + "single" + ENTER,
+ "Result": SINGLE},
-# test redisplay (nothing should really happen)
- {"Name" : "redisplay test",
- "Sequence" : "single" + CTRL_L + ENTER,
- "Result" : SINGLE},
+ # test redisplay (nothing should really happen)
+ {"Name": "redisplay test",
+ "Sequence": "single" + CTRL_L + ENTER,
+ "Result": SINGLE},
-# test autocomplete
- {"Name" : "autocomplete test 1",
- "Sequence" : "si" + TAB + ENTER,
- "Result" : SINGLE},
- {"Name" : "autocomplete test 2",
- "Sequence" : "si" + TAB + "_" + TAB + ENTER,
- "Result" : SINGLE_LONG},
- {"Name" : "autocomplete test 3",
- "Sequence" : "in" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 4",
- "Sequence" : "am" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 5",
- "Sequence" : "am" + TAB + "fir" + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 6",
- "Sequence" : "am" + TAB + "fir" + TAB + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 7",
- "Sequence" : "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
- "Result" : CMD1},
- {"Name" : "autocomplete test 8",
- "Sequence" : "am" + TAB + " am" + TAB + " " + ENTER,
- "Result" : AMBIG},
- {"Name" : "autocomplete test 9",
- "Sequence" : "am" + TAB + "inv" + TAB + ENTER,
- "Result" : BAD_ARG},
- {"Name" : "autocomplete test 10",
- "Sequence" : "au" + TAB + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "autocomplete test 11",
- "Sequence" : "au" + TAB + "1" + ENTER,
- "Result" : AUTO1},
- {"Name" : "autocomplete test 12",
- "Sequence" : "au" + TAB + "2" + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 13",
- "Sequence" : "au" + TAB + "2" + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 14",
- "Sequence" : "au" + TAB + "2 " + TAB + ENTER,
- "Result" : AUTO2},
- {"Name" : "autocomplete test 15",
- "Sequence" : "24" + TAB + ENTER,
- "Result" : "24"},
+ # test autocomplete
+ {"Name": "autocomplete test 1",
+ "Sequence": "si" + TAB + ENTER,
+ "Result": SINGLE},
+ {"Name": "autocomplete test 2",
+ "Sequence": "si" + TAB + "_" + TAB + ENTER,
+ "Result": SINGLE_LONG},
+ {"Name": "autocomplete test 3",
+ "Sequence": "in" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 4",
+ "Sequence": "am" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 5",
+ "Sequence": "am" + TAB + "fir" + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 6",
+ "Sequence": "am" + TAB + "fir" + TAB + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 7",
+ "Sequence": "am" + TAB + "fir" + TAB + " " + TAB + ENTER,
+ "Result": CMD1},
+ {"Name": "autocomplete test 8",
+ "Sequence": "am" + TAB + " am" + TAB + " " + ENTER,
+ "Result": AMBIG},
+ {"Name": "autocomplete test 9",
+ "Sequence": "am" + TAB + "inv" + TAB + ENTER,
+ "Result": BAD_ARG},
+ {"Name": "autocomplete test 10",
+ "Sequence": "au" + TAB + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "autocomplete test 11",
+ "Sequence": "au" + TAB + "1" + ENTER,
+ "Result": AUTO1},
+ {"Name": "autocomplete test 12",
+ "Sequence": "au" + TAB + "2" + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 13",
+ "Sequence": "au" + TAB + "2" + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 14",
+ "Sequence": "au" + TAB + "2 " + TAB + ENTER,
+ "Result": AUTO2},
+ {"Name": "autocomplete test 15",
+ "Sequence": "24" + TAB + ENTER,
+ "Result": "24"},
-# test history
- {"Name" : "history test 1",
- "Sequence" : "invalid" + ENTER + "single" + ENTER + "invalid" + ENTER + UP + CTRL_P + ENTER,
- "Result" : SINGLE},
- {"Name" : "history test 2",
- "Sequence" : "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" + ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
- "Result" : SINGLE},
+ # test history
+ {"Name": "history test 1",
+ "Sequence": "invalid" + ENTER + "single" + ENTER + "invalid" +
+ ENTER + UP + CTRL_P + ENTER,
+ "Result": SINGLE},
+ {"Name": "history test 2",
+ "Sequence": "invalid" + ENTER + "ambiguous first" + ENTER + "invalid" +
+ ENTER + "single" + ENTER + UP * 3 + CTRL_N + DOWN + ENTER,
+ "Result": SINGLE},
-#
-# tests that improve coverage
-#
+ #
+ # tests that improve coverage
+ #
-# empty space tests
- {"Name" : "empty space test 1",
- "Sequence" : RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 2",
- "Sequence" : BKSPACE + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 3",
- "Sequence" : CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 4",
- "Sequence" : ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 5",
- "Sequence" : " " + CTRL_E*2 + CTRL_A*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 6",
- "Sequence" : " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 7",
- "Sequence" : " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 8",
- "Sequence" : " space" + CTRL_W*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 9",
- "Sequence" : " space" + ALT_BKSPACE*2 + ENTER,
- "Result" : PROMPT},
- {"Name" : "empty space test 10",
- "Sequence" : " space " + CTRL_A + ALT_D*3 + ENTER,
- "Result" : PROMPT},
+ # empty space tests
+ {"Name": "empty space test 1",
+ "Sequence": RIGHT + LEFT + CTRL_B + CTRL_F + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 2",
+ "Sequence": BKSPACE + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 3",
+ "Sequence": CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 4",
+ "Sequence": ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 5",
+ "Sequence": " " + CTRL_E*2 + CTRL_A*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 6",
+ "Sequence": " " + CTRL_A + ALT_F*2 + ALT_B*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 7",
+ "Sequence": " " + CTRL_A + CTRL_D + CTRL_E + CTRL_D + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 8",
+ "Sequence": " space" + CTRL_W*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 9",
+ "Sequence": " space" + ALT_BKSPACE*2 + ENTER,
+ "Result": PROMPT},
+ {"Name": "empty space test 10",
+ "Sequence": " space " + CTRL_A + ALT_D*3 + ENTER,
+ "Result": PROMPT},
-# non-printable char tests
- {"Name" : "non-printable test 1",
- "Sequence" : chr(27) + chr(47) + ENTER,
- "Result" : PROMPT},
- {"Name" : "non-printable test 2",
- "Sequence" : chr(27) + chr(128) + ENTER*7,
- "Result" : PROMPT},
- {"Name" : "non-printable test 3",
- "Sequence" : chr(27) + chr(91) + chr(127) + ENTER*6,
- "Result" : PROMPT},
+ # non-printable char tests
+ {"Name": "non-printable test 1",
+ "Sequence": chr(27) + chr(47) + ENTER,
+ "Result": PROMPT},
+ {"Name": "non-printable test 2",
+ "Sequence": chr(27) + chr(128) + ENTER*7,
+ "Result": PROMPT},
+ {"Name": "non-printable test 3",
+ "Sequence": chr(27) + chr(91) + chr(127) + ENTER*6,
+ "Result": PROMPT},
-# miscellaneous tests
- {"Name" : "misc test 1",
- "Sequence" : ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 2",
- "Sequence" : "single #comment" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 3",
- "Sequence" : "#empty line" + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 4",
- "Sequence" : " single " + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 5",
- "Sequence" : "single#" + ENTER,
- "Result" : SINGLE},
- {"Name" : "misc test 6",
- "Sequence" : 'a' * 257 + ENTER,
- "Result" : NOT_FOUND},
- {"Name" : "misc test 7",
- "Sequence" : "clear_history" + UP*5 + DOWN*5 + ENTER,
- "Result" : PROMPT},
- {"Name" : "misc test 8",
- "Sequence" : "a" + HELP + CTRL_C,
- "Result" : PROMPT},
- {"Name" : "misc test 9",
- "Sequence" : CTRL_D*3,
- "Result" : None},
+ # miscellaneous tests
+ {"Name": "misc test 1",
+ "Sequence": ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 2",
+ "Sequence": "single #comment" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 3",
+ "Sequence": "#empty line" + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 4",
+ "Sequence": " single " + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 5",
+ "Sequence": "single#" + ENTER,
+ "Result": SINGLE},
+ {"Name": "misc test 6",
+ "Sequence": 'a' * 257 + ENTER,
+ "Result": NOT_FOUND},
+ {"Name": "misc test 7",
+ "Sequence": "clear_history" + UP*5 + DOWN*5 + ENTER,
+ "Result": PROMPT},
+ {"Name": "misc test 8",
+ "Sequence": "a" + HELP + CTRL_C,
+ "Result": PROMPT},
+ {"Name": "misc test 9",
+ "Sequence": CTRL_D*3,
+ "Result": None},
]
diff --git a/app/test/autotest.py b/app/test/autotest.py
index b9fd6b6..3a00538 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -33,44 +33,46 @@
# Script that uses either test app or qemu controlled by python-pexpect
-import sys, autotest_data, autotest_runner
-
+import autotest_data
+import autotest_runner
+import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print"Usage: autotest.py [test app|test iso image]",
+ print "[target] [whitelist|-blacklist]"
if len(sys.argv) < 3:
- usage()
- sys.exit(1)
+ usage()
+ sys.exit(1)
target = sys.argv[2]
-test_whitelist=None
-test_blacklist=None
+test_whitelist = None
+test_blacklist = None
# get blacklist/whitelist
if len(sys.argv) > 3:
- testlist = sys.argv[3].split(',')
- testlist = [test.lower() for test in testlist]
- if testlist[0].startswith('-'):
- testlist[0] = testlist[0].lstrip('-')
- test_blacklist = testlist
- else:
- test_whitelist = testlist
+ testlist = sys.argv[3].split(',')
+ testlist = [test.lower() for test in testlist]
+ if testlist[0].startswith('-'):
+ testlist[0] = testlist[0].lstrip('-')
+ test_blacklist = testlist
+ else:
+ test_whitelist = testlist
-cmdline = "%s -c f -n 4"%(sys.argv[1])
+cmdline = "%s -c f -n 4" % (sys.argv[1])
print cmdline
-runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist, test_whitelist)
+runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
+ test_whitelist)
for test_group in autotest_data.parallel_test_group_list:
- runner.add_parallel_test_group(test_group)
+ runner.add_parallel_test_group(test_group)
for test_group in autotest_data.non_parallel_test_group_list:
- runner.add_non_parallel_test_group(test_group)
+ runner.add_non_parallel_test_group(test_group)
num_fails = runner.run_all_tests()
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 9e8fd94..0cf4cfd 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -36,12 +36,14 @@
from glob import glob
from autotest_test_funcs import *
+
# quick and dirty function to find out number of sockets
def num_sockets():
- result = len(glob("/sys/devices/system/node/node*"))
- if result == 0:
- return 1
- return result
+ result = len(glob("/sys/devices/system/node/node*"))
+ if result == 0:
+ return 1
+ return result
+
# Assign given number to each socket
# e.g. 32 becomes 32,32 or 32,32,32,32
@@ -51,420 +53,419 @@ def per_sockets(num):
# groups of tests that can be run in parallel
# the grouping has been found largely empirically
parallel_test_group_list = [
-
-{
- "Prefix": "group_1",
- "Memory" : per_sockets(8),
- "Tests" :
- [
- {
- "Name" : "Cycles autotest",
- "Command" : "cycles_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Timer autotest",
- "Command" : "timer_autotest",
- "Func" : timer_autotest,
- "Report" : None,
- },
- {
- "Name" : "Debug autotest",
- "Command" : "debug_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Errno autotest",
- "Command" : "errno_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Meter autotest",
- "Command" : "meter_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Common autotest",
- "Command" : "common_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Resource autotest",
- "Command" : "resource_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_2",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Memory autotest",
- "Command" : "memory_autotest",
- "Func" : memory_autotest,
- "Report" : None,
- },
- {
- "Name" : "Read/write lock autotest",
- "Command" : "rwlock_autotest",
- "Func" : rwlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Logs autotest",
- "Command" : "logs_autotest",
- "Func" : logs_autotest,
- "Report" : None,
- },
- {
- "Name" : "CPU flags autotest",
- "Command" : "cpuflags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Version autotest",
- "Command" : "version_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL filesystem autotest",
- "Command" : "eal_fs_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "EAL flags autotest",
- "Command" : "eal_flags_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Hash autotest",
- "Command" : "hash_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ],
-},
-{
- "Prefix": "group_3",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "LPM autotest",
- "Command" : "lpm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "LPM6 autotest",
- "Command" : "lpm6_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memcpy autotest",
- "Command" : "memcpy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Memzone autotest",
- "Command" : "memzone_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "String autotest",
- "Command" : "string_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Alarm autotest",
- "Command" : "alarm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_4",
- "Memory" : per_sockets(128),
- "Tests" :
- [
- {
- "Name" : "PCI autotest",
- "Command" : "pci_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Malloc autotest",
- "Command" : "malloc_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Multi-process autotest",
- "Command" : "multiprocess_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mbuf autotest",
- "Command" : "mbuf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Per-lcore autotest",
- "Command" : "per_lcore_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Ring autotest",
- "Command" : "ring_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_5",
- "Memory" : "32",
- "Tests" :
- [
- {
- "Name" : "Spinlock autotest",
- "Command" : "spinlock_autotest",
- "Func" : spinlock_autotest,
- "Report" : None,
- },
- {
- "Name" : "Byte order autotest",
- "Command" : "byteorder_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "TAILQ autotest",
- "Command" : "tailq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Command-line autotest",
- "Command" : "cmdline_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Interrupts autotest",
- "Command" : "interrupt_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "group_6",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Function reentrancy autotest",
- "Command" : "func_reentrancy_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Mempool autotest",
- "Command" : "mempool_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Atomics autotest",
- "Command" : "atomic_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Prefetch autotest",
- "Command" : "prefetch_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Red autotest",
- "Command" : "red_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
-{
- "Prefix" : "group_7",
- "Memory" : "64",
- "Tests" :
- [
- {
- "Name" : "PMD ring autotest",
- "Command" : "ring_pmd_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" : "Access list control autotest",
- "Command" : "acl_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- {
- "Name" :"Sched autotest",
- "Command" : "sched_autotest",
- "Func" :default_autotest,
- "Report" :None,
- },
- ]
-},
+ {
+ "Prefix": "group_1",
+ "Memory": per_sockets(8),
+ "Tests":
+ [
+ {
+ "Name": "Cycles autotest",
+ "Command": "cycles_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Timer autotest",
+ "Command": "timer_autotest",
+ "Func": timer_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Debug autotest",
+ "Command": "debug_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Errno autotest",
+ "Command": "errno_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Meter autotest",
+ "Command": "meter_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Common autotest",
+ "Command": "common_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Resource autotest",
+ "Command": "resource_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_2",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Memory autotest",
+ "Command": "memory_autotest",
+ "Func": memory_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Read/write lock autotest",
+ "Command": "rwlock_autotest",
+ "Func": rwlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Logs autotest",
+ "Command": "logs_autotest",
+ "Func": logs_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "CPU flags autotest",
+ "Command": "cpuflags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Version autotest",
+ "Command": "version_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL filesystem autotest",
+ "Command": "eal_fs_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "EAL flags autotest",
+ "Command": "eal_flags_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Hash autotest",
+ "Command": "hash_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ],
+ },
+ {
+ "Prefix": "group_3",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "LPM autotest",
+ "Command": "lpm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "LPM6 autotest",
+ "Command": "lpm6_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memcpy autotest",
+ "Command": "memcpy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Memzone autotest",
+ "Command": "memzone_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "String autotest",
+ "Command": "string_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Alarm autotest",
+ "Command": "alarm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_4",
+ "Memory": per_sockets(128),
+ "Tests":
+ [
+ {
+ "Name": "PCI autotest",
+ "Command": "pci_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Malloc autotest",
+ "Command": "malloc_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Multi-process autotest",
+ "Command": "multiprocess_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mbuf autotest",
+ "Command": "mbuf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Per-lcore autotest",
+ "Command": "per_lcore_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Ring autotest",
+ "Command": "ring_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_5",
+ "Memory": "32",
+ "Tests":
+ [
+ {
+ "Name": "Spinlock autotest",
+ "Command": "spinlock_autotest",
+ "Func": spinlock_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Byte order autotest",
+ "Command": "byteorder_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "TAILQ autotest",
+ "Command": "tailq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Command-line autotest",
+ "Command": "cmdline_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Interrupts autotest",
+ "Command": "interrupt_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_6",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Function reentrancy autotest",
+ "Command": "func_reentrancy_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Mempool autotest",
+ "Command": "mempool_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Atomics autotest",
+ "Command": "atomic_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Prefetch autotest",
+ "Command": "prefetch_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Red autotest",
+ "Command": "red_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "group_7",
+ "Memory": "64",
+ "Tests":
+ [
+ {
+ "Name": "PMD ring autotest",
+ "Command": "ring_pmd_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Access list control autotest",
+ "Command": "acl_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "Sched autotest",
+ "Command": "sched_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
# tests that should not be run when any other tests are running
non_parallel_test_group_list = [
-{
- "Prefix" : "kni",
- "Memory" : "512",
- "Tests" :
- [
- {
- "Name" : "KNI autotest",
- "Command" : "kni_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "mempool_perf",
- "Memory" : per_sockets(256),
- "Tests" :
- [
- {
- "Name" : "Mempool performance autotest",
- "Command" : "mempool_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "memcpy_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Memcpy performance autotest",
- "Command" : "memcpy_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "hash_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Hash performance autotest",
- "Command" : "hash_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power autotest",
- "Command" : "power_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_acpi_cpufreq",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power ACPI cpufreq autotest",
- "Command" : "power_acpi_cpufreq_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix" : "power_kvm_vm",
- "Memory" : "16",
- "Tests" :
- [
- {
- "Name" : "Power KVM VM autotest",
- "Command" : "power_kvm_vm_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
-{
- "Prefix": "timer_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Timer performance autotest",
- "Command" : "timer_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ {
+ "Prefix": "kni",
+ "Memory": "512",
+ "Tests":
+ [
+ {
+ "Name": "KNI autotest",
+ "Command": "kni_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "mempool_perf",
+ "Memory": per_sockets(256),
+ "Tests":
+ [
+ {
+ "Name": "Mempool performance autotest",
+ "Command": "mempool_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "memcpy_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Memcpy performance autotest",
+ "Command": "memcpy_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "hash_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Hash performance autotest",
+ "Command": "hash_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power autotest",
+ "Command": "power_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_acpi_cpufreq",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power ACPI cpufreq autotest",
+ "Command": "power_acpi_cpufreq_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "power_kvm_vm",
+ "Memory": "16",
+ "Tests":
+ [
+ {
+ "Name": "Power KVM VM autotest",
+ "Command": "power_kvm_vm_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
+ {
+ "Prefix": "timer_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Timer performance autotest",
+ "Command": "timer_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
-#
-# Please always make sure that ring_perf is the last test!
-#
-{
- "Prefix": "ring_perf",
- "Memory" : per_sockets(512),
- "Tests" :
- [
- {
- "Name" : "Ring performance autotest",
- "Command" : "ring_perf_autotest",
- "Func" : default_autotest,
- "Report" : None,
- },
- ]
-},
+ #
+ # Please always make sure that ring_perf is the last test!
+ #
+ {
+ "Prefix": "ring_perf",
+ "Memory": per_sockets(512),
+ "Tests":
+ [
+ {
+ "Name": "Ring performance autotest",
+ "Command": "ring_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ ]
+ },
]
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 21d3be2..55b63a8 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -33,20 +33,29 @@
# The main logic behind running autotests in parallel
-import multiprocessing, subprocess, sys, pexpect, re, time, os, StringIO, csv
+import StringIO
+import csv
+import multiprocessing
+import pexpect
+import re
+import subprocess
+import sys
+import time
# wait for prompt
+
+
def wait_prompt(child):
- try:
- child.sendline()
- result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
- timeout = 120)
- except:
- return False
- if result == 0:
- return True
- else:
- return False
+ try:
+ child.sendline()
+ result = child.expect(["RTE>>", pexpect.TIMEOUT, pexpect.EOF],
+ timeout=120)
+ except:
+ return False
+ if result == 0:
+ return True
+ else:
+ return False
# run a test group
# each result tuple in results list consists of:
@@ -60,363 +69,363 @@ def wait_prompt(child):
# this function needs to be outside AutotestRunner class
# because otherwise Pool won't work (or rather it will require
# quite a bit of effort to make it work).
-def run_test_group(cmdline, test_group):
- results = []
- child = None
- start_time = time.time()
- startuplog = None
-
- # run test app
- try:
- # prepare logging of init
- startuplog = StringIO.StringIO()
-
- print >>startuplog, "\n%s %s\n" % ("="*20, test_group["Prefix"])
- print >>startuplog, "\ncmdline=%s" % cmdline
-
- child = pexpect.spawn(cmdline, logfile=startuplog)
-
- # wait for target to boot
- if not wait_prompt(child):
- child.close()
-
- results.append((-1, "Fail [No prompt]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for test in test_group["Tests"]:
- results.append((-1, "Fail [No prompt]", test["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- except:
- results.append((-1, "Fail [Can't run]", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # mark all tests as failed
- for t in test_group["Tests"]:
- results.append((-1, "Fail [Can't run]", t["Name"],
- time.time() - start_time, "", None))
- # exit test
- return results
-
- # startup was successful
- results.append((0, "Success", "Start %s" % test_group["Prefix"],
- time.time() - start_time, startuplog.getvalue(), None))
-
- # parse the binary for available test commands
- binary = cmdline.split()[0]
- stripped = 'not stripped' not in subprocess.check_output(['file', binary])
- if not stripped:
- symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
- avail_cmds = re.findall('test_register_(\w+)', symbols)
-
- # run all tests in test group
- for test in test_group["Tests"]:
-
- # create log buffer for each test
- # in multiprocessing environment, the logging would be
- # interleaved and will create a mess, hence the buffering
- logfile = StringIO.StringIO()
- child.logfile = logfile
-
- result = ()
-
- # make a note when the test started
- start_time = time.time()
-
- try:
- # print test name to log buffer
- print >>logfile, "\n%s %s\n" % ("-"*20, test["Name"])
-
- # run test function associated with the test
- if stripped or test["Command"] in avail_cmds:
- result = test["Func"](child, test["Command"])
- else:
- result = (0, "Skipped [Not Available]")
-
- # make a note when the test was finished
- end_time = time.time()
-
- # append test data to the result tuple
- result += (test["Name"], end_time - start_time,
- logfile.getvalue())
-
- # call report function, if any defined, and supply it with
- # target and complete log for test run
- if test["Report"]:
- report = test["Report"](self.target, log)
-
- # append report to results tuple
- result += (report,)
- else:
- # report is None
- result += (None,)
- except:
- # make a note when the test crashed
- end_time = time.time()
-
- # mark test as failed
- result = (-1, "Fail [Crash]", test["Name"],
- end_time - start_time, logfile.getvalue(), None)
- finally:
- # append the results to the results list
- results.append(result)
-
- # regardless of whether test has crashed, try quitting it
- try:
- child.sendline("quit")
- child.close()
- # if the test crashed, just do nothing instead
- except:
- # nop
- pass
-
- # return test results
- return results
-
+def run_test_group(cmdline, test_group):
+ results = []
+ child = None
+ start_time = time.time()
+ startuplog = None
+
+ # run test app
+ try:
+ # prepare logging of init
+ startuplog = StringIO.StringIO()
+
+ print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
+ print >>startuplog, "\ncmdline=%s" % cmdline
+
+ child = pexpect.spawn(cmdline, logfile=startuplog)
+
+ # wait for target to boot
+ if not wait_prompt(child):
+ child.close()
+
+ results.append((-1,
+ "Fail [No prompt]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for test in test_group["Tests"]:
+ results.append((-1, "Fail [No prompt]", test["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ except:
+ results.append((-1,
+ "Fail [Can't run]",
+ "Start %s" % test_group["Prefix"],
+ time.time() - start_time,
+ startuplog.getvalue(),
+ None))
+
+ # mark all tests as failed
+ for t in test_group["Tests"]:
+ results.append((-1, "Fail [Can't run]", t["Name"],
+ time.time() - start_time, "", None))
+ # exit test
+ return results
+
+ # startup was successful
+ results.append((0, "Success", "Start %s" % test_group["Prefix"],
+ time.time() - start_time, startuplog.getvalue(), None))
+
+ # parse the binary for available test commands
+ binary = cmdline.split()[0]
+ stripped = 'not stripped' not in subprocess.check_output(['file', binary])
+ if not stripped:
+ symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
+ avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+ # run all tests in test group
+ for test in test_group["Tests"]:
+
+ # create log buffer for each test
+ # in multiprocessing environment, the logging would be
+ # interleaved and will create a mess, hence the buffering
+ logfile = StringIO.StringIO()
+ child.logfile = logfile
+
+ result = ()
+
+ # make a note when the test started
+ start_time = time.time()
+
+ try:
+ # print test name to log buffer
+ print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+
+ # run test function associated with the test
+ if stripped or test["Command"] in avail_cmds:
+ result = test["Func"](child, test["Command"])
+ else:
+ result = (0, "Skipped [Not Available]")
+
+ # make a note when the test was finished
+ end_time = time.time()
+
+ # append test data to the result tuple
+ result += (test["Name"], end_time - start_time,
+ logfile.getvalue())
+
+ # call report function, if any defined, and supply it with
+ # target and complete log for test run
+ if test["Report"]:
+ report = test["Report"](self.target, log)
+
+ # append report to results tuple
+ result += (report,)
+ else:
+ # report is None
+ result += (None,)
+ except:
+ # make a note when the test crashed
+ end_time = time.time()
+
+ # mark test as failed
+ result = (-1, "Fail [Crash]", test["Name"],
+ end_time - start_time, logfile.getvalue(), None)
+ finally:
+ # append the results to the results list
+ results.append(result)
+
+ # regardless of whether test has crashed, try quitting it
+ try:
+ child.sendline("quit")
+ child.close()
+ # if the test crashed, just do nothing instead
+ except:
+ # nop
+ pass
+
+ # return test results
+ return results
# class representing an instance of autotests run
class AutotestRunner:
- cmdline = ""
- parallel_test_groups = []
- non_parallel_test_groups = []
- logfile = None
- csvwriter = None
- target = ""
- start = None
- n_tests = 0
- fails = 0
- log_buffers = []
- blacklist = []
- whitelist = []
-
-
- def __init__(self, cmdline, target, blacklist, whitelist):
- self.cmdline = cmdline
- self.target = target
- self.blacklist = blacklist
- self.whitelist = whitelist
-
- # log file filename
- logfile = "%s.log" % target
- csvfile = "%s.csv" % target
-
- self.logfile = open(logfile, "w")
- csvfile = open(csvfile, "w")
- self.csvwriter = csv.writer(csvfile)
-
- # prepare results table
- self.csvwriter.writerow(["test_name","test_result","result_str"])
-
-
-
- # set up cmdline string
- def __get_cmdline(self, test):
- cmdline = self.cmdline
-
- # append memory limitations for each test
- # otherwise tests won't run in parallel
- if not "i686" in self.target:
- cmdline += " --socket-mem=%s"% test["Memory"]
- else:
- # affinitize startup so that tests don't fail on i686
- cmdline = "taskset 1 " + cmdline
- cmdline += " -m " + str(sum(map(int,test["Memory"].split(","))))
-
- # set group prefix for autotest group
- # otherwise they won't run in parallel
- cmdline += " --file-prefix=%s"% test["Prefix"]
-
- return cmdline
-
-
-
- def add_parallel_test_group(self,test_group):
- self.parallel_test_groups.append(test_group)
-
- def add_non_parallel_test_group(self,test_group):
- self.non_parallel_test_groups.append(test_group)
-
-
- def __process_results(self, results):
- # this iterates over individual test results
- for i, result in enumerate(results):
-
- # increase total number of tests that were run
- # do not include "start" test
- if i > 0:
- self.n_tests += 1
-
- # unpack result tuple
- test_result, result_str, test_name, \
- test_time, log, report = result
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
-
- # don't print out total time every line, it's the same anyway
- if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
- else:
- print ""
-
- # if test failed and it wasn't a "start" test
- if test_result < 0 and not i == 0:
- self.fails += 1
-
- # collect logs
- self.log_buffers.append(log)
-
- # create report if it exists
- if report:
- try:
- f = open("%s_%s_report.rst" % (self.target,test_name), "w")
- except IOError:
- print "Report for %s could not be created!" % test_name
- else:
- with f:
- f.write(report)
-
- # write test result to CSV file
- if i != 0:
- self.csvwriter.writerow([test_name, test_result, result_str])
-
-
-
-
- # this function iterates over test groups and removes each
- # test that is not in whitelist/blacklist
- def __filter_groups(self, test_groups):
- groups_to_remove = []
-
- # filter out tests from parallel test groups
- for i, test_group in enumerate(test_groups):
-
- # iterate over a copy so that we could safely delete individual tests
- for test in test_group["Tests"][:]:
- test_id = test["Command"]
-
- # dump tests are specified in full e.g. "Dump_mempool"
- if "_autotest" in test_id:
- test_id = test_id[:-len("_autotest")]
-
- # filter out blacklisted/whitelisted tests
- if self.blacklist and test_id in self.blacklist:
- test_group["Tests"].remove(test)
- continue
- if self.whitelist and test_id not in self.whitelist:
- test_group["Tests"].remove(test)
- continue
-
- # modify or remove original group
- if len(test_group["Tests"]) > 0:
- test_groups[i] = test_group
- else:
- # remember which groups should be deleted
- # put the numbers backwards so that we start
- # deleting from the end, not from the beginning
- groups_to_remove.insert(0, i)
-
- # remove test groups that need to be removed
- for i in groups_to_remove:
- del test_groups[i]
-
- return test_groups
-
-
-
- # iterate over test groups and run tests associated with them
- def run_all_tests(self):
- # filter groups
- self.parallel_test_groups = \
- self.__filter_groups(self.parallel_test_groups)
- self.non_parallel_test_groups = \
- self.__filter_groups(self.non_parallel_test_groups)
-
- # create a pool of worker threads
- pool = multiprocessing.Pool(processes=1)
-
- results = []
-
- # whatever happens, try to save as much logs as possible
- try:
-
- # create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
-
- # make a note of tests start time
- self.start = time.time()
-
- # assign worker threads to run test groups
- for test_group in self.parallel_test_groups:
- result = pool.apply_async(run_test_group,
- [self.__get_cmdline(test_group), test_group])
- results.append(result)
-
- # iterate while we have group execution results to get
- while len(results) > 0:
-
- # iterate over a copy to be able to safely delete results
- # this iterates over a list of group results
- for group_result in results[:]:
-
- # if the thread hasn't finished yet, continue
- if not group_result.ready():
- continue
-
- res = group_result.get()
-
- self.__process_results(res)
-
- # remove result from results list once we're done with it
- results.remove(group_result)
-
- # run non_parallel tests. they are run one by one, synchronously
- for test_group in self.non_parallel_test_groups:
- group_result = run_test_group(self.__get_cmdline(test_group), test_group)
-
- self.__process_results(group_result)
-
- # get total run time
- cur_time = time.time()
- total_time = int(cur_time - self.start)
-
- # print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60, total_time % 60)
- if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
-
- # write summary to logfile
- self.logfile.write("Summary\n")
- self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
- self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
- self.logfile.write("Failed tests: ".ljust(15) + "%i\n" % self.fails)
- except:
- print "Exception occured"
- print sys.exc_info()
- self.fails = 1
-
- # drop logs from all executions to a logfile
- for buf in self.log_buffers:
- self.logfile.write(buf.replace("\r",""))
-
- log_buffers = []
-
- return self.fails
+ cmdline = ""
+ parallel_test_groups = []
+ non_parallel_test_groups = []
+ logfile = None
+ csvwriter = None
+ target = ""
+ start = None
+ n_tests = 0
+ fails = 0
+ log_buffers = []
+ blacklist = []
+ whitelist = []
+
+ def __init__(self, cmdline, target, blacklist, whitelist):
+ self.cmdline = cmdline
+ self.target = target
+ self.blacklist = blacklist
+ self.whitelist = whitelist
+
+ # log file filename
+ logfile = "%s.log" % target
+ csvfile = "%s.csv" % target
+
+ self.logfile = open(logfile, "w")
+ csvfile = open(csvfile, "w")
+ self.csvwriter = csv.writer(csvfile)
+
+ # prepare results table
+ self.csvwriter.writerow(["test_name", "test_result", "result_str"])
+
+ # set up cmdline string
+ def __get_cmdline(self, test):
+ cmdline = self.cmdline
+
+ # append memory limitations for each test
+ # otherwise tests won't run in parallel
+ if "i686" not in self.target:
+ cmdline += " --socket-mem=%s" % test["Memory"]
+ else:
+ # affinitize startup so that tests don't fail on i686
+ cmdline = "taskset 1 " + cmdline
+ cmdline += " -m " + str(sum(map(int, test["Memory"].split(","))))
+
+ # set group prefix for autotest group
+ # otherwise they won't run in parallel
+ cmdline += " --file-prefix=%s" % test["Prefix"]
+
+ return cmdline
+
+ def add_parallel_test_group(self, test_group):
+ self.parallel_test_groups.append(test_group)
+
+ def add_non_parallel_test_group(self, test_group):
+ self.non_parallel_test_groups.append(test_group)
+
+ def __process_results(self, results):
+ # this iterates over individual test results
+ for i, result in enumerate(results):
+
+ # increase total number of tests that were run
+ # do not include "start" test
+ if i > 0:
+ self.n_tests += 1
+
+ # unpack result tuple
+ test_result, result_str, test_name, \
+ test_time, log, report = result
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print results, test run time and total time since start
+ print ("%s:" % test_name).ljust(30),
+ print result_str.ljust(29),
+ print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+
+ # don't print out total time every line, it's the same anyway
+ if i == len(results) - 1:
+ print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ else:
+ print ""
+
+ # if test failed and it wasn't a "start" test
+ if test_result < 0 and not i == 0:
+ self.fails += 1
+
+ # collect logs
+ self.log_buffers.append(log)
+
+ # create report if it exists
+ if report:
+ try:
+ f = open("%s_%s_report.rst" %
+ (self.target, test_name), "w")
+ except IOError:
+ print "Report for %s could not be created!" % test_name
+ else:
+ with f:
+ f.write(report)
+
+ # write test result to CSV file
+ if i != 0:
+ self.csvwriter.writerow([test_name, test_result, result_str])
+
+ # this function iterates over test groups and removes tests that
+ # are blacklisted or not present in the whitelist
+ def __filter_groups(self, test_groups):
+ groups_to_remove = []
+
+ # filter out tests from parallel test groups
+ for i, test_group in enumerate(test_groups):
+
+ # iterate over a copy so that we could safely delete individual
+ # tests
+ for test in test_group["Tests"][:]:
+ test_id = test["Command"]
+
+ # dump tests are specified in full e.g. "Dump_mempool"
+ if "_autotest" in test_id:
+ test_id = test_id[:-len("_autotest")]
+
+ # filter out blacklisted/whitelisted tests
+ if self.blacklist and test_id in self.blacklist:
+ test_group["Tests"].remove(test)
+ continue
+ if self.whitelist and test_id not in self.whitelist:
+ test_group["Tests"].remove(test)
+ continue
+
+ # modify or remove original group
+ if len(test_group["Tests"]) > 0:
+ test_groups[i] = test_group
+ else:
+ # remember which groups should be deleted
+ # put the numbers backwards so that we start
+ # deleting from the end, not from the beginning
+ groups_to_remove.insert(0, i)
+
+ # remove test groups that need to be removed
+ for i in groups_to_remove:
+ del test_groups[i]
+
+ return test_groups
+
+ # iterate over test groups and run tests associated with them
+ def run_all_tests(self):
+ # filter groups
+ self.parallel_test_groups = \
+ self.__filter_groups(self.parallel_test_groups)
+ self.non_parallel_test_groups = \
+ self.__filter_groups(self.non_parallel_test_groups)
+
+ # create a pool of worker threads
+ pool = multiprocessing.Pool(processes=1)
+
+ results = []
+
+ # whatever happens, try to save as much logs as possible
+ try:
+
+ # create table header
+ print ""
+ print "Test name".ljust(30),
+ print "Test result".ljust(29),
+ print "Test".center(9),
+ print "Total".center(9)
+ print "=" * 80
+
+ # make a note of tests start time
+ self.start = time.time()
+
+ # assign worker threads to run test groups
+ for test_group in self.parallel_test_groups:
+ result = pool.apply_async(run_test_group,
+ [self.__get_cmdline(test_group),
+ test_group])
+ results.append(result)
+
+ # iterate while we have group execution results to get
+ while len(results) > 0:
+
+ # iterate over a copy to be able to safely delete results
+ # this iterates over a list of group results
+ for group_result in results[:]:
+
+ # if the thread hasn't finished yet, continue
+ if not group_result.ready():
+ continue
+
+ res = group_result.get()
+
+ self.__process_results(res)
+
+ # remove result from results list once we're done with it
+ results.remove(group_result)
+
+ # run non_parallel tests. they are run one by one, synchronously
+ for test_group in self.non_parallel_test_groups:
+ group_result = run_test_group(
+ self.__get_cmdline(test_group), test_group)
+
+ self.__process_results(group_result)
+
+ # get total run time
+ cur_time = time.time()
+ total_time = int(cur_time - self.start)
+
+ # print out summary
+ print "=" * 80
+ print "Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60)
+ if self.fails != 0:
+ print "Number of failed tests: %s" % str(self.fails)
+
+ # write summary to logfile
+ self.logfile.write("Summary\n")
+ self.logfile.write("Target: ".ljust(15) + "%s\n" % self.target)
+ self.logfile.write("Tests: ".ljust(15) + "%i\n" % self.n_tests)
+ self.logfile.write("Failed tests: ".ljust(
+ 15) + "%i\n" % self.fails)
+ except:
+ print "Exception occurred"
+ print sys.exc_info()
+ self.fails = 1
+
+ # drop logs from all executions to a logfile
+ for buf in self.log_buffers:
+ self.logfile.write(buf.replace("\r", ""))
+
+ return self.fails
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index 14cffd0..c482ea8 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -33,257 +33,272 @@
# Test functions
-import sys, pexpect, time, os, re
+import pexpect
# default autotest, used to run most tests
# waits for "Test OK"
+
+
def default_autotest(child, test_name):
- child.sendline(test_name)
- result = child.expect(["Test OK", "Test Failed",
- "Command not found", pexpect.TIMEOUT], timeout = 900)
- if result == 1:
- return -1, "Fail"
- elif result == 2:
- return -1, "Fail [Not found]"
- elif result == 3:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ result = child.expect(["Test OK", "Test Failed",
+ "Command not found", pexpect.TIMEOUT], timeout=900)
+ if result == 1:
+ return -1, "Fail"
+ elif result == 2:
+ return -1, "Fail [Not found]"
+ elif result == 3:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
# autotest used to run dump commands
# just fires the command
+
+
def dump_autotest(child, test_name):
- child.sendline(test_name)
- return 0, "Success"
+ child.sendline(test_name)
+ return 0, "Success"
# memory autotest
# reads output and waits for Test OK
+
+
def memory_autotest(child, test_name):
- child.sendline(test_name)
- regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, socket_id:[0-9]*"
- index = child.expect([regexp, pexpect.TIMEOUT], timeout = 180)
- if index != 0:
- return -1, "Fail [Timeout]"
- size = int(child.match.groups()[0], 16)
- if size <= 0:
- return -1, "Fail [Bad size]"
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
- return 0, "Success"
+ child.sendline(test_name)
+ regexp = "phys:0x[0-9a-f]*, len:([0-9]*), virt:0x[0-9a-f]*, " \
+ "socket_id:[0-9]*"
+ index = child.expect([regexp, pexpect.TIMEOUT], timeout=180)
+ if index != 0:
+ return -1, "Fail [Timeout]"
+ size = int(child.match.groups()[0], 16)
+ if size <= 0:
+ return -1, "Fail [Bad size]"
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+ return 0, "Success"
+
def spinlock_autotest(child, test_name):
- i = 0
- ir = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Hello from within recursive locks from ([0-9]*) !",
- pexpect.TIMEOUT], timeout = 5)
- # ok
- if index == 0:
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
- elif index == 3:
- if int(child.match.groups()[0]) < ir:
- return -1, "Fail [Bad order]"
- ir = int(child.match.groups()[0])
-
- # fail
- elif index == 4:
- return -1, "Fail [Timeout]"
- elif index == 1:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ ir = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Hello from within recursive locks "
+ "from ([0-9]*) !",
+ pexpect.TIMEOUT], timeout=5)
+ # ok
+ if index == 0:
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+ elif index == 3:
+ if int(child.match.groups()[0]) < ir:
+ return -1, "Fail [Bad order]"
+ ir = int(child.match.groups()[0])
+
+ # fail
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+ elif index == 1:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def rwlock_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
- while True:
- index = child.expect(["Test OK",
- "Test Failed",
- "Hello from core ([0-9]*) !",
- "Global write lock taken on master core ([0-9]*)",
- pexpect.TIMEOUT], timeout = 10)
- # ok
- if index == 0:
- if i != 0xffff:
- return -1, "Fail [Message is missing]"
- break
-
- # message, check ordering
- elif index == 2:
- if int(child.match.groups()[0]) < i:
- return -1, "Fail [Bad order]"
- i = int(child.match.groups()[0])
-
- # must be the last message, check ordering
- elif index == 3:
- i = 0xffff
-
- elif index == 4:
- return -1, "Fail [Timeout]"
-
- # fail
- else:
- return -1, "Fail"
-
- return 0, "Success"
+ i = 0
+ child.sendline(test_name)
+ while True:
+ index = child.expect(["Test OK",
+ "Test Failed",
+ "Hello from core ([0-9]*) !",
+ "Global write lock taken on master "
+ "core ([0-9]*)",
+ pexpect.TIMEOUT], timeout=10)
+ # ok
+ if index == 0:
+ if i != 0xffff:
+ return -1, "Fail [Message is missing]"
+ break
+
+ # message, check ordering
+ elif index == 2:
+ if int(child.match.groups()[0]) < i:
+ return -1, "Fail [Bad order]"
+ i = int(child.match.groups()[0])
+
+ # must be the last message, check ordering
+ elif index == 3:
+ i = 0xffff
+
+ elif index == 4:
+ return -1, "Fail [Timeout]"
+
+ # fail
+ else:
+ return -1, "Fail"
+
+ return 0, "Success"
+
def logs_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- log_list = [
- "TESTAPP1: error message",
- "TESTAPP1: critical message",
- "TESTAPP2: critical message",
- "TESTAPP1: error message",
- ]
-
- for log_msg in log_list:
- index = child.expect([log_msg,
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 3:
- return -1, "Fail [Timeout]"
- # not ok
- elif index != 0:
- return -1, "Fail"
-
- index = child.expect(["Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ log_list = [
+ "TESTAPP1: error message",
+ "TESTAPP1: critical message",
+ "TESTAPP2: critical message",
+ "TESTAPP1: error message",
+ ]
+
+ for log_msg in log_list:
+ index = child.expect([log_msg,
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 3:
+ return -1, "Fail [Timeout]"
+ # not ok
+ elif index != 0:
+ return -1, "Fail"
+
+ index = child.expect(["Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ return 0, "Success"
+
def timer_autotest(child, test_name):
- i = 0
- child.sendline(test_name)
-
- index = child.expect(["Start timer stress tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer stress tests 2",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- index = child.expect(["Start timer basic tests",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 5)
-
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- prev_lcore_timer1 = -1
-
- lcore_tim0 = -1
- lcore_tim1 = -1
- lcore_tim2 = -1
- lcore_tim3 = -1
-
- while True:
- index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) count=([0-9]*) on core ([0-9]*)",
- "Test OK",
- "Test Failed",
- pexpect.TIMEOUT], timeout = 10)
-
- if index == 1:
- break
-
- if index == 2:
- return -1, "Fail"
- elif index == 3:
- return -1, "Fail [Timeout]"
-
- try:
- t = int(child.match.groups()[0])
- id = int(child.match.groups()[1])
- cnt = int(child.match.groups()[2])
- lcore = int(child.match.groups()[3])
- except:
- return -1, "Fail [Cannot parse]"
-
- # timer0 always expires on the same core when cnt < 20
- if id == 0:
- if lcore_tim0 == -1:
- lcore_tim0 = lcore
- elif lcore != lcore_tim0 and cnt < 20:
- return -1, "Fail [lcore != lcore_tim0 (%d, %d)]"%(lcore, lcore_tim0)
- if cnt > 21:
- return -1, "Fail [tim0 cnt > 21]"
-
- # timer1 each time expires on a different core
- if id == 1:
- if lcore == lcore_tim1:
- return -1, "Fail [lcore == lcore_tim1 (%d, %d)]"%(lcore, lcore_tim1)
- lcore_tim1 = lcore
- if cnt > 10:
- return -1, "Fail [tim1 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 2:
- if lcore_tim2 == -1:
- lcore_tim2 = lcore
- elif lcore != lcore_tim2:
- return -1, "Fail [lcore != lcore_tim2 (%d, %d)]"%(lcore, lcore_tim2)
- if cnt > 30:
- return -1, "Fail [tim2 cnt > 30]"
-
- # timer0 always expires on the same core
- if id == 3:
- if lcore_tim3 == -1:
- lcore_tim3 = lcore
- elif lcore != lcore_tim3:
- return -1, "Fail [lcore_tim3 changed (%d -> %d)]"%(lcore, lcore_tim3)
- if cnt > 30:
- return -1, "Fail [tim3 cnt > 30]"
-
- # must be 2 different cores
- if lcore_tim0 == lcore_tim3:
- return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]"%(lcore_tim0, lcore_tim3)
-
- return 0, "Success"
+ child.sendline(test_name)
+
+ index = child.expect(["Start timer stress tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer stress tests 2",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ index = child.expect(["Start timer basic tests",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=5)
+
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ lcore_tim0 = -1
+ lcore_tim1 = -1
+ lcore_tim2 = -1
+ lcore_tim3 = -1
+
+ while True:
+ index = child.expect(["TESTTIMER: ([0-9]*): callback id=([0-9]*) "
+ "count=([0-9]*) on core ([0-9]*)",
+ "Test OK",
+ "Test Failed",
+ pexpect.TIMEOUT], timeout=10)
+
+ if index == 1:
+ break
+
+ if index == 2:
+ return -1, "Fail"
+ elif index == 3:
+ return -1, "Fail [Timeout]"
+
+ try:
+ id = int(child.match.groups()[1])
+ cnt = int(child.match.groups()[2])
+ lcore = int(child.match.groups()[3])
+ except:
+ return -1, "Fail [Cannot parse]"
+
+ # timer0 always expires on the same core when cnt < 20
+ if id == 0:
+ if lcore_tim0 == -1:
+ lcore_tim0 = lcore
+ elif lcore != lcore_tim0 and cnt < 20:
+ return -1, "Fail [lcore != lcore_tim0 (%d, %d)]" \
+ % (lcore, lcore_tim0)
+ if cnt > 21:
+ return -1, "Fail [tim0 cnt > 21]"
+
+ # timer1 each time expires on a different core
+ if id == 1:
+ if lcore == lcore_tim1:
+ return -1, "Fail [lcore == lcore_tim1 (%d, %d)]" \
+ % (lcore, lcore_tim1)
+ lcore_tim1 = lcore
+ if cnt > 10:
+ return -1, "Fail [tim1 cnt > 30]"
+
+ # timer0 always expires on the same core
+ if id == 2:
+ if lcore_tim2 == -1:
+ lcore_tim2 = lcore
+ elif lcore != lcore_tim2:
+ return -1, "Fail [lcore != lcore_tim2 (%d, %d)]" \
+ % (lcore, lcore_tim2)
+ if cnt > 30:
+ return -1, "Fail [tim2 cnt > 30]"
+
+ # timer0 always expires on the same core
+ if id == 3:
+ if lcore_tim3 == -1:
+ lcore_tim3 = lcore
+ elif lcore != lcore_tim3:
+ return -1, "Fail [lcore_tim3 changed (%d -> %d)]" \
+ % (lcore, lcore_tim3)
+ if cnt > 30:
+ return -1, "Fail [tim3 cnt > 30]"
+
+ # must be 2 different cores
+ if lcore_tim0 == lcore_tim3:
+ return -1, "Fail [lcore_tim0 (%d) == lcore_tim3 (%d)]" \
+ % (lcore_tim0, lcore_tim3)
+
+ return 0, "Success"
+
def ring_autotest(child, test_name):
- child.sendline(test_name)
- index = child.expect(["Test OK", "Test Failed",
- pexpect.TIMEOUT], timeout = 2)
- if index == 1:
- return -1, "Fail"
- elif index == 2:
- return -1, "Fail [Timeout]"
-
- child.sendline("set_watermark test 100")
- child.sendline("dump_ring test")
- index = child.expect([" watermark=100",
- pexpect.TIMEOUT], timeout = 1)
- if index != 0:
- return -1, "Fail [Bad watermark]"
-
- return 0, "Success"
+ child.sendline(test_name)
+ index = child.expect(["Test OK", "Test Failed",
+ pexpect.TIMEOUT], timeout=2)
+ if index == 1:
+ return -1, "Fail"
+ elif index == 2:
+ return -1, "Fail [Timeout]"
+
+ child.sendline("set_watermark test 100")
+ child.sendline("dump_ring test")
+ index = child.expect([" watermark=100",
+ pexpect.TIMEOUT], timeout=1)
+ if index != 0:
+ return -1, "Fail [Bad watermark]"
+
+ return 0, "Success"
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 29e8efb..34c62de 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -58,7 +58,8 @@
html_show_copyright = False
highlight_language = 'none'
-version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion']).decode('utf-8').rstrip()
+version = subprocess.check_output(['make', '-sRrC', '../../', 'showversion'])
+version = version.decode('utf-8').rstrip()
release = version
master_doc = 'index'
@@ -94,6 +95,7 @@
'preamble': latex_preamble
}
+
# Override the default Latex formatter in order to modify the
# code/verbatim blocks.
class CustomLatexFormatter(LatexFormatter):
@@ -117,12 +119,12 @@ def __init__(self, **options):
("tools/devbind", "dpdk-devbind",
"check device status and bind/unbind them from drivers", "", 8)]
-######## :numref: fallback ########
+
+# ####### :numref: fallback ########
# The following hook functions add some simple handling for the :numref:
# directive for Sphinx versions prior to 1.3.1. The functions replace the
# :numref: reference with a link to the target (for all Sphinx doc types).
# It doesn't try to label figures/tables.
-
def numref_role(reftype, rawtext, text, lineno, inliner):
"""
Add a Sphinx role to handle numref references. Note, we can't convert
@@ -136,6 +138,7 @@ def numref_role(reftype, rawtext, text, lineno, inliner):
internal=True)
return [newnode], []
+
def process_numref(app, doctree, from_docname):
"""
Process the numref nodes once the doctree has been built and prior to
diff --git a/examples/ip_pipeline/config/diagram-generator.py b/examples/ip_pipeline/config/diagram-generator.py
index 6b7170b..1748833 100755
--- a/examples/ip_pipeline/config/diagram-generator.py
+++ b/examples/ip_pipeline/config/diagram-generator.py
@@ -36,7 +36,8 @@
# the DPDK ip_pipeline application.
#
# The input configuration file is translated to an output file in DOT syntax,
-# which is then used to create the image file using graphviz (www.graphviz.org).
+# which is then used to create the image file using graphviz
+# (www.graphviz.org).
#
from __future__ import print_function
@@ -94,6 +95,7 @@
# SOURCEx | SOURCEx | SOURCEx | PIPELINEy | SOURCEx
# SINKx | SINKx | PIPELINEy | SINKx | SINKx
+
#
# Parse the input configuration file to detect the graph nodes and edges
#
@@ -321,16 +323,17 @@ def process_config_file(cfgfile):
#
print('Creating image file "%s" ...' % imgfile)
if os.system('which dot > /dev/null'):
- print('Error: Unable to locate "dot" executable.' \
- 'Please install the "graphviz" package (www.graphviz.org).')
+ print('Error: Unable to locate "dot" executable. '
+ 'Please install the "graphviz" package (www.graphviz.org).')
return
os.system(dot_cmd)
if __name__ == '__main__':
- parser = argparse.ArgumentParser(description=\
- 'Create diagram for IP pipeline configuration file.')
+ parser = argparse.ArgumentParser(description='Create diagram for IP '
+ 'pipeline configuration '
+ 'file.')
parser.add_argument(
'-f',
diff --git a/examples/ip_pipeline/config/pipeline-to-core-mapping.py b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
index c2050b8..7a4eaa2 100755
--- a/examples/ip_pipeline/config/pipeline-to-core-mapping.py
+++ b/examples/ip_pipeline/config/pipeline-to-core-mapping.py
@@ -39,15 +39,14 @@
#
from __future__ import print_function
-import sys
-import errno
-import os
-import re
+from collections import namedtuple
+import argparse
import array
+import errno
import itertools
+import os
import re
-import argparse
-from collections import namedtuple
+import sys
# default values
enable_stage0_traceout = 1
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index d38d0b5..ccc22ec 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -38,40 +38,40 @@
cores = []
core_map = {}
-fd=open("/proc/cpuinfo")
+fd = open("/proc/cpuinfo")
lines = fd.readlines()
fd.close()
core_details = []
core_lines = {}
for line in lines:
- if len(line.strip()) != 0:
- name, value = line.split(":", 1)
- core_lines[name.strip()] = value.strip()
- else:
- core_details.append(core_lines)
- core_lines = {}
+ if len(line.strip()) != 0:
+ name, value = line.split(":", 1)
+ core_lines[name.strip()] = value.strip()
+ else:
+ core_details.append(core_lines)
+ core_lines = {}
for core in core_details:
- for field in ["processor", "core id", "physical id"]:
- if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
- sys.exit(1)
- core[field] = int(core[field])
+ for field in ["processor", "core id", "physical id"]:
+ if field not in core:
+ print "Error getting '%s' value from /proc/cpuinfo" % field
+ sys.exit(1)
+ core[field] = int(core[field])
- if core["core id"] not in cores:
- cores.append(core["core id"])
- if core["physical id"] not in sockets:
- sockets.append(core["physical id"])
- key = (core["physical id"], core["core id"])
- if key not in core_map:
- core_map[key] = []
- core_map[key].append(core["processor"])
+ if core["core id"] not in cores:
+ cores.append(core["core id"])
+ if core["physical id"] not in sockets:
+ sockets.append(core["physical id"])
+ key = (core["physical id"], core["core id"])
+ if key not in core_map:
+ core_map[key] = []
+ core_map[key].append(core["processor"])
print "============================================================"
print "Core and Socket Information (as reported by '/proc/cpuinfo')"
print "============================================================\n"
-print "cores = ",cores
+print "cores = ", cores
print "sockets = ", sockets
print ""
@@ -81,15 +81,16 @@
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
+ print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
print ""
+
print " ".ljust(max_core_id_len + len('Core ')),
for s in sockets:
- print "--------".ljust(max_core_map_len),
+ print "--------".ljust(max_core_map_len),
print ""
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
- for s in sockets:
- print str(core_map[(s,c)]).ljust(max_core_map_len),
- print ""
+ print "Core %s" % str(c).ljust(max_core_id_len),
+ for s in sockets:
+ print str(core_map[(s, c)]).ljust(max_core_map_len),
+ print ""
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index f1d374d..4f51a4b 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -93,10 +93,10 @@ def usage():
Unbind a device (Equivalent to \"-b none\")
--force:
- By default, network devices which are used by Linux - as indicated by having
- routes in the routing table - cannot be modified. Using the --force
- flag overrides this behavior, allowing active links to be forcibly
- unbound.
+ By default, network devices which are used by Linux - as indicated by
+ having routes in the routing table - cannot be modified. Using the
+ --force flag overrides this behavior, allowing active links to be
+ forcibly unbound.
WARNING: This can lead to loss of network connection and should be used
with caution.
@@ -151,7 +151,7 @@ def find_module(mod):
# check for a copy based off current path
tools_dir = dirname(abspath(sys.argv[0]))
- if (tools_dir.endswith("tools")):
+ if tools_dir.endswith("tools"):
base_dir = dirname(tools_dir)
find_out = check_output(["find", base_dir, "-name", mod + ".ko"])
if len(find_out) > 0: # something matched
@@ -249,7 +249,7 @@ def get_nic_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
+ if len(dev_line) == 0:
if dev["Class"][0:2] == NETWORK_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
@@ -315,8 +315,8 @@ def get_crypto_details():
dev = {}
dev_lines = check_output(["lspci", "-Dvmmn"]).splitlines()
for dev_line in dev_lines:
- if (len(dev_line) == 0):
- if (dev["Class"][0:2] == CRYPTO_BASE_CLASS):
+ if len(dev_line) == 0:
+ if dev["Class"][0:2] == CRYPTO_BASE_CLASS:
# convert device and vendor ids to numbers, then add to global
dev["Vendor"] = int(dev["Vendor"], 16)
dev["Device"] = int(dev["Device"], 16)
@@ -513,7 +513,8 @@ def display_devices(title, dev_list, extra_params=None):
for dev in dev_list:
if extra_params is not None:
strings.append("%s '%s' %s" % (dev["Slot"],
- dev["Device_str"], extra_params % dev))
+ dev["Device_str"],
+ extra_params % dev))
else:
strings.append("%s '%s'" % (dev["Slot"], dev["Device_str"]))
# sort before printing, so that the entries appear in PCI order
@@ -532,7 +533,7 @@ def show_status():
# split our list of network devices into the three categories above
for d in devices.keys():
- if (NETWORK_BASE_CLASS in devices[d]["Class"]):
+ if NETWORK_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
@@ -555,7 +556,7 @@ def show_status():
no_drv = []
for d in devices.keys():
- if (CRYPTO_BASE_CLASS in devices[d]["Class"]):
+ if CRYPTO_BASE_CLASS in devices[d]["Class"]:
if not has_driver(d):
no_drv.append(devices[d])
continue
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3db9819..3d3ad7d 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -4,52 +4,20 @@
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+import json
import os
+import platform
+import string
import sys
+from elftools.common.exceptions import ELFError
+from elftools.common.py3compat import (byte2int, bytes2str, str2bytes)
+from elftools.elf.elffile import ELFFile
from optparse import OptionParser
-import string
-import json
-import platform
# For running from development directory. It should take precedence over the
# installed pyelftools.
sys.path.insert(0, '.')
-
-from elftools import __version__
-from elftools.common.exceptions import ELFError
-from elftools.common.py3compat import (
- ifilter, byte2int, bytes2str, itervalues, str2bytes)
-from elftools.elf.elffile import ELFFile
-from elftools.elf.dynamic import DynamicSection, DynamicSegment
-from elftools.elf.enums import ENUM_D_TAG
-from elftools.elf.segments import InterpSegment
-from elftools.elf.sections import SymbolTableSection
-from elftools.elf.gnuversions import (
- GNUVerSymSection, GNUVerDefSection,
- GNUVerNeedSection,
-)
-from elftools.elf.relocation import RelocationSection
-from elftools.elf.descriptions import (
- describe_ei_class, describe_ei_data, describe_ei_version,
- describe_ei_osabi, describe_e_type, describe_e_machine,
- describe_e_version_numeric, describe_p_type, describe_p_flags,
- describe_sh_type, describe_sh_flags,
- describe_symbol_type, describe_symbol_bind, describe_symbol_visibility,
- describe_symbol_shndx, describe_reloc_type, describe_dyn_tag,
- describe_ver_flags,
-)
-from elftools.elf.constants import E_FLAGS
-from elftools.dwarf.dwarfinfo import DWARFInfo
-from elftools.dwarf.descriptions import (
- describe_reg_name, describe_attr_value, set_global_machine_arch,
- describe_CFI_instructions, describe_CFI_register_rule,
- describe_CFI_CFA_rule,
-)
-from elftools.dwarf.constants import (
- DW_LNS_copy, DW_LNS_set_file, DW_LNE_define_file)
-from elftools.dwarf.callframe import CIE, FDE
-
raw_output = False
pcidb = None
@@ -326,7 +294,7 @@ def parse_pmd_info_string(self, mystring):
for i in optional_pmd_info:
try:
print("%s: %s" % (i['tag'], pmdinfo[i['id']]))
- except KeyError as e:
+ except KeyError:
continue
if (len(pmdinfo["pci_ids"]) != 0):
@@ -475,7 +443,7 @@ def process_dt_needed_entries(self):
with open(library, 'rb') as file:
try:
libelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
print("%s is no an ELF file" % library)
continue
libelf.process_dt_needed_entries()
@@ -491,7 +459,7 @@ def scan_autoload_path(autoload_path):
try:
dirs = os.listdir(autoload_path)
- except OSError as e:
+ except OSError:
# Couldn't read the directory, give up
return
@@ -503,10 +471,10 @@ def scan_autoload_path(autoload_path):
try:
file = open(dpath, 'rb')
readelf = ReadElf(file, sys.stdout)
- except ELFError as e:
+ except ELFError:
# this is likely not an elf file, skip it
continue
- except IOError as e:
+ except IOError:
# No permission to read the file, skip it
continue
@@ -531,7 +499,7 @@ def scan_for_autoload_pmds(dpdk_path):
file = open(dpdk_path, 'rb')
try:
readelf = ReadElf(file, sys.stdout)
- except ElfError as e:
+ except ElfError:
if raw_output is False:
print("Unable to parse %s" % file)
return
@@ -557,7 +525,7 @@ def main(stream=None):
global raw_output
global pcidb
- pcifile_default = "./pci.ids" # for unknown OS's assume local file
+ pcifile_default = "./pci.ids" # For unknown OS's assume local file
if platform.system() == 'Linux':
pcifile_default = "/usr/share/hwdata/pci.ids"
elif platform.system() == 'FreeBSD':
@@ -577,7 +545,8 @@ def main(stream=None):
"to get vendor names from",
default=pcifile_default, metavar="FILE")
optparser.add_option("-t", "--table", dest="tblout",
- help="output information on hw support as a hex table",
+ help="output information on hw support as a "
+ "hex table",
action='store_true')
optparser.add_option("-p", "--plugindir", dest="pdir",
help="scan dpdk for autoload plugins",
--
2.7.4
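The pattern repeated across the hunks above is mechanical: four-space
indentation, "is None" instead of "== None", no spaces around
keyword-argument equals signs, and one sorted import per line. A minimal
before/after sketch of those fixes, using a hypothetical helper rather
than a function from the patch:

    # before: the style the pep8 tool flags (hypothetical helper,
    # not a function from the patch)
    def check_result(child,name,timeout = 5):
        if name == None:
            return -1
        return child.expect(name, timeout = timeout)

    # after: the PEP8-compliant form of the same helper
    def check_result(child, name, timeout=5):
        if name is None:
            return -1
        return child.expect(name, timeout=timeout)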
* [dpdk-dev] [PATCH v4 2/3] app: make python apps python2/3 compliant
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (18 preceding siblings ...)
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 1/3] app: make python apps pep8 compliant John McNamara
@ 2016-12-21 15:03 ` John McNamara
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 3/3] doc: add required python versions to docs John McNamara
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-21 15:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, nhorman, John McNamara
Make all the DPDK Python apps work with both Python 2 and 3
so that they run with whichever version is the system default.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
app/cmdline_test/cmdline_test.py | 26 ++++++++++++------------
app/cmdline_test/cmdline_test_data.py | 2 --
app/test/autotest.py | 10 ++++-----
app/test/autotest_data.py | 2 --
app/test/autotest_runner.py | 37 ++++++++++++++++------------------
app/test/autotest_test_funcs.py | 2 --
tools/cpu_layout.py | 38 ++++++++++++++++++-----------------
tools/dpdk-devbind.py | 2 +-
tools/dpdk-pmdinfo.py | 14 +++++++------
9 files changed, 64 insertions(+), 69 deletions(-)
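The bulk of the conversion in the hunks below is the print statement. As
a standalone sketch of the idiom (illustrative strings, not lines taken
from the diffs), this runs unchanged under Python 2.7+ and 3.2+:

    from __future__ import print_function
    import sys

    # With print_function imported, print is a function on both
    # Python 2 and 3, so a single spelling works everywhere.
    print("Running tests...")

    # Python 3 rejects the old trailing-comma form 'print x,';
    # building the whole line first and printing once avoids it.
    label = "sample test:".ljust(30)
    print(label, "PASS")

    sys.exit(0)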
diff --git a/app/cmdline_test/cmdline_test.py b/app/cmdline_test/cmdline_test.py
index 4729987..229f71f 100755
--- a/app/cmdline_test/cmdline_test.py
+++ b/app/cmdline_test/cmdline_test.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,7 +32,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that runs cmdline_test app and feeds keystrokes into it.
-
+from __future__ import print_function
import cmdline_test_data
import os
import pexpect
@@ -81,38 +81,38 @@ def runHistoryTest(child):
# the path to cmdline_test executable is supplied via command-line.
if len(sys.argv) < 2:
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
test_app_path = sys.argv[1]
if not os.path.exists(test_app_path):
- print "Error: please supply cmdline_test app path"
+ print("Error: please supply cmdline_test app path")
sys.exit(1)
child = pexpect.spawn(test_app_path)
-print "Running command-line tests..."
+print("Running command-line tests...")
for test in cmdline_test_data.tests:
- print (test["Name"] + ":").ljust(30),
+ testname = (test["Name"] + ":").ljust(30)
try:
runTest(child, test)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
# since last test quits the app, run new instance
child = pexpect.spawn(test_app_path)
-print ("History fill test:").ljust(30),
+testname = ("History fill test:").ljust(30)
try:
runHistoryTest(child)
- print "PASS"
+ print(testname, "PASS")
except:
- print "FAIL"
- print child
+ print(testname, "FAIL")
+ print(child)
sys.exit(1)
child.close()
sys.exit(0)
diff --git a/app/cmdline_test/cmdline_test_data.py b/app/cmdline_test/cmdline_test_data.py
index 3ce6cbc..28dfefe 100644
--- a/app/cmdline_test/cmdline_test_data.py
+++ b/app/cmdline_test/cmdline_test_data.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/app/test/autotest.py b/app/test/autotest.py
index 3a00538..5c19a02 100644
--- a/app/test/autotest.py
+++ b/app/test/autotest.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# BSD LICENSE
#
@@ -32,15 +32,15 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Script that uses either test app or qemu controlled by python-pexpect
-
+from __future__ import print_function
import autotest_data
import autotest_runner
import sys
def usage():
- print"Usage: autotest.py [test app|test iso image]",
- print "[target] [whitelist|-blacklist]"
+ print("Usage: autotest.py [test app|test iso image] ",
+ "[target] [whitelist|-blacklist]")
if len(sys.argv) < 3:
usage()
@@ -63,7 +63,7 @@ def usage():
cmdline = "%s -c f -n 4" % (sys.argv[1])
-print cmdline
+print(cmdline)
runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
test_whitelist)
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 0cf4cfd..0cd598b 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/app/test/autotest_runner.py b/app/test/autotest_runner.py
index 55b63a8..fc882ec 100644
--- a/app/test/autotest_runner.py
+++ b/app/test/autotest_runner.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
@@ -271,15 +269,16 @@ def __process_results(self, results):
total_time = int(cur_time - self.start)
# print results, test run time and total time since start
- print ("%s:" % test_name).ljust(30),
- print result_str.ljust(29),
- print "[%02dm %02ds]" % (test_time / 60, test_time % 60),
+ result = ("%s:" % test_name).ljust(30)
+ result += result_str.ljust(29)
+ result += "[%02dm %02ds]" % (test_time / 60, test_time % 60)
# don't print out total time every line, it's the same anyway
if i == len(results) - 1:
- print "[%02dm %02ds]" % (total_time / 60, total_time % 60)
+ print(result,
+ "[%02dm %02ds]" % (total_time / 60, total_time % 60))
else:
- print ""
+ print(result)
# if test failed and it wasn't a "start" test
if test_result < 0 and not i == 0:
@@ -294,7 +293,7 @@ def __process_results(self, results):
f = open("%s_%s_report.rst" %
(self.target, test_name), "w")
except IOError:
- print "Report for %s could not be created!" % test_name
+ print("Report for %s could not be created!" % test_name)
else:
with f:
f.write(report)
@@ -360,12 +359,10 @@ def run_all_tests(self):
try:
# create table header
- print ""
- print "Test name".ljust(30),
- print "Test result".ljust(29),
- print "Test".center(9),
- print "Total".center(9)
- print "=" * 80
+ print("")
+ print("Test name".ljust(30), "Test result".ljust(29),
+ "Test".center(9), "Total".center(9))
+ print("=" * 80)
# make a note of tests start time
self.start = time.time()
@@ -407,11 +404,11 @@ def run_all_tests(self):
total_time = int(cur_time - self.start)
# print out summary
- print "=" * 80
- print "Total run time: %02dm %02ds" % (total_time / 60,
- total_time % 60)
+ print("=" * 80)
+ print("Total run time: %02dm %02ds" % (total_time / 60,
+ total_time % 60))
if self.fails != 0:
- print "Number of failed tests: %s" % str(self.fails)
+ print("Number of failed tests: %s" % str(self.fails))
# write summary to logfile
self.logfile.write("Summary\n")
@@ -420,8 +417,8 @@ def run_all_tests(self):
self.logfile.write("Failed tests: ".ljust(
15) + "%i\n" % self.fails)
except:
- print "Exception occurred"
- print sys.exc_info()
+ print("Exception occurred")
+ print(sys.exc_info())
self.fails = 1
# drop logs from all executions to a logfile
diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index c482ea8..1c5f390 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -1,5 +1,3 @@
-#!/usr/bin/python
-
# BSD LICENSE
#
# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
diff --git a/tools/cpu_layout.py b/tools/cpu_layout.py
index ccc22ec..0e049a6 100755
--- a/tools/cpu_layout.py
+++ b/tools/cpu_layout.py
@@ -1,4 +1,5 @@
-#! /usr/bin/python
+#!/usr/bin/env python
+
#
# BSD LICENSE
#
@@ -31,7 +32,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
-
+from __future__ import print_function
import sys
sockets = []
@@ -55,7 +56,7 @@
for core in core_details:
for field in ["processor", "core id", "physical id"]:
if field not in core:
- print "Error getting '%s' value from /proc/cpuinfo" % field
+ print("Error getting '%s' value from /proc/cpuinfo" % field)
sys.exit(1)
core[field] = int(core[field])
@@ -68,29 +69,30 @@
core_map[key] = []
core_map[key].append(core["processor"])
-print "============================================================"
-print "Core and Socket Information (as reported by '/proc/cpuinfo')"
-print "============================================================\n"
-print "cores = ", cores
-print "sockets = ", sockets
-print ""
+print("============================================================")
+print("Core and Socket Information (as reported by '/proc/cpuinfo')")
+print("============================================================\n")
+print("cores = ", cores)
+print("sockets = ", sockets)
+print("")
max_processor_len = len(str(len(cores) * len(sockets) * 2 - 1))
max_core_map_len = max_processor_len * 2 + len('[, ]') + len('Socket ')
max_core_id_len = len(str(max(cores)))
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "Socket %s" % str(s).ljust(max_core_map_len - len('Socket ')),
-print ""
+ output += " Socket %s" % str(s).ljust(max_core_map_len - len('Socket '))
+print(output)
-print " ".ljust(max_core_id_len + len('Core ')),
+output = " ".ljust(max_core_id_len + len('Core '))
for s in sockets:
- print "--------".ljust(max_core_map_len),
-print ""
+ output += " --------".ljust(max_core_map_len)
+ output += " "
+print(output)
for c in cores:
- print "Core %s" % str(c).ljust(max_core_id_len),
+ output = "Core %s" % str(c).ljust(max_core_id_len)
for s in sockets:
- print str(core_map[(s, c)]).ljust(max_core_map_len),
- print ""
+ output += " " + str(core_map[(s, c)]).ljust(max_core_map_len)
+ print(output)
diff --git a/tools/dpdk-devbind.py b/tools/dpdk-devbind.py
index 4f51a4b..e057b87 100755
--- a/tools/dpdk-devbind.py
+++ b/tools/dpdk-devbind.py
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#! /usr/bin/env python
#
# BSD LICENSE
#
diff --git a/tools/dpdk-pmdinfo.py b/tools/dpdk-pmdinfo.py
index 3d3ad7d..d4e51aa 100755
--- a/tools/dpdk-pmdinfo.py
+++ b/tools/dpdk-pmdinfo.py
@@ -1,9 +1,11 @@
#!/usr/bin/env python
+
# -------------------------------------------------------------------------
#
# Utility to dump PMD_INFO_STRING support from an object file
#
# -------------------------------------------------------------------------
+from __future__ import print_function
import json
import os
import platform
@@ -54,7 +56,7 @@ def addDevice(self, deviceStr):
self.devices[devID] = Device(deviceStr)
def report(self):
- print self.ID, self.name
+ print(self.ID, self.name)
for id, dev in self.devices.items():
dev.report()
@@ -80,7 +82,7 @@ def __init__(self, deviceStr):
self.subdevices = {}
def report(self):
- print "\t%s\t%s" % (self.ID, self.name)
+ print("\t%s\t%s" % (self.ID, self.name))
for subID, subdev in self.subdevices.items():
subdev.report()
@@ -126,7 +128,7 @@ def __init__(self, vendor, device, name):
self.name = name
def report(self):
- print "\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name)
+ print("\t\t%s\t%s\t%s" % (self.vendorID, self.deviceID, self.name))
class PCIIds:
@@ -154,7 +156,7 @@ def reportVendors(self):
"""Reports the vendors
"""
for vid, v in self.vendors.items():
- print v.ID, v.name
+ print(v.ID, v.name)
def report(self, vendor=None):
"""
@@ -185,7 +187,7 @@ def findDate(self, content):
def parse(self):
if len(self.contents) < 1:
- print "data/%s-pci.ids not found" % self.date
+ print("data/%s-pci.ids not found" % self.date)
else:
vendorID = ""
deviceID = ""
@@ -432,7 +434,7 @@ def process_dt_needed_entries(self):
for tag in dynsec.iter_tags():
if tag.entry.d_tag == 'DT_NEEDED':
- rc = tag.needed.find("librte_pmd")
+ rc = tag.needed.find(b"librte_pmd")
if (rc != -1):
library = search_file(tag.needed,
runpath + ":" + ldlibpath +
--
2.7.4
* [dpdk-dev] [PATCH v4 3/3] doc: add required python versions to docs
2016-12-08 15:51 [dpdk-dev] [PATCH v1 0/4] app: make python apps python2/3 compliant John McNamara
` (19 preceding siblings ...)
2016-12-21 15:03 ` [dpdk-dev] [PATCH v4 2/3] app: make python apps python2/3 compliant John McNamara
@ 2016-12-21 15:03 ` John McNamara
20 siblings, 0 replies; 28+ messages in thread
From: John McNamara @ 2016-12-21 15:03 UTC (permalink / raw)
To: dev; +Cc: mkletzan, nhorman, John McNamara
Add a requirement to support both Python 2 and 3 to the
DPDK Python Coding Standards and the Getting Started Guide.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/contributing/coding_style.rst | 3 ++-
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
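As a sketch of what the documented requirement means in practice, a
helper script could enforce the minimum versions with a guard like the
one below; this is an assumption for illustration, and no such check is
added by this patch:

    import sys

    # Refuse to run on interpreters older than the documented
    # minimums of Python 2.7 and Python 3.2.
    if sys.version_info < (2, 7) or (
            sys.version_info[0] == 3 and sys.version_info < (3, 2)):
        sys.stderr.write("Error: Python 2.7+ or 3.2+ is required\n")
        sys.exit(1)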
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 1eb67f3..4163960 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -690,6 +690,7 @@ Control Statements
Python Code
-----------
-All python code should be compliant with `PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
+All Python code should work with Python 2.7+ and 3.2+ and be compliant with
+`PEP8 (Style Guide for Python Code) <https://www.python.org/dev/peps/pep-0008/>`_.
The ``pep8`` tool can be used for testing compliance with the guidelines.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 76d82e6..61222c6 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -84,7 +84,7 @@ Compilation of the DPDK
x86_x32 ABI is currently supported with distribution packages only on Ubuntu
higher than 13.10 or recent Debian distribution. The only supported compiler is gcc 4.9+.
-* Python, version 2.6 or 2.7, to use various helper scripts included in the DPDK package.
+* Python, version 2.7+ or 3.2+, to use various helper scripts included in the DPDK package.
**Optional Tools:**
--
2.7.4