* [dts][PATCH V1 0/4] test_plans/*: modify test plan to adapt meson build
@ 2022-01-22 18:20 Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 1/4] " Yu Jiang
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Yu Jiang @ 2022-01-22 18:20 UTC (permalink / raw)
To: lijuan.tu, dts; +Cc: Yu Jiang
test_plans/*: modify test plan to adapt meson build
Yu Jiang (4):
test_plans/*: modify test plan to adapt meson build
test_plans/*: modify test plan to adapt meson build
test_plans/*: modify test plan to adapt meson build
test_plans/*: modify test plan to adapt meson build
test_plans/ABI_stable_test_plan.rst | 5 +-
test_plans/bbdev_test_plan.rst | 4 +-
test_plans/blocklist_test_plan.rst | 6 +-
test_plans/checksum_offload_test_plan.rst | 2 +-
.../cloud_filter_with_l4_port_test_plan.rst | 2 +-
test_plans/cmdline_test_plan.rst | 9 +-
test_plans/dcf_lifecycle_test_plan.rst | 52 ++---
test_plans/ddp_gtp_qregion_test_plan.rst | 2 +-
test_plans/ddp_gtp_test_plan.rst | 2 +-
test_plans/ddp_l2tpv3_test_plan.rst | 2 +-
test_plans/ddp_mpls_test_plan.rst | 2 +-
test_plans/ddp_ppp_l2tp_test_plan.rst | 2 +-
test_plans/dual_vlan_test_plan.rst | 2 +-
test_plans/dynamic_flowtype_test_plan.rst | 2 +-
test_plans/dynamic_queue_test_plan.rst | 2 +-
test_plans/eeprom_dump_test_plan.rst | 2 +-
test_plans/ethtool_stats_test_plan.rst | 34 ++--
test_plans/eventdev_perf_test_plan.rst | 36 ++--
.../eventdev_pipeline_perf_test_plan.rst | 25 ++-
test_plans/eventdev_pipeline_test_plan.rst | 24 ++-
test_plans/external_memory_test_plan.rst | 8 +-
.../external_mempool_handler_test_plan.rst | 23 ++-
test_plans/firmware_version_test_plan.rst | 2 +-
test_plans/interrupt_pmd_test_plan.rst | 15 +-
test_plans/ip_pipeline_test_plan.rst | 33 +--
test_plans/ipgre_test_plan.rst | 6 +-
test_plans/ipsec_gw_and_library_test_plan.rst | 12 +-
test_plans/ipv4_reassembly_test_plan.rst | 24 ++-
..._get_extra_queue_information_test_plan.rst | 4 +-
test_plans/jumboframes_test_plan.rst | 4 +-
test_plans/kernelpf_iavf_test_plan.rst | 12 +-
test_plans/kni_test_plan.rst | 14 +-
test_plans/l2fwd_jobstats_test_plan.rst | 11 +-
test_plans/l2tp_esp_coverage_test_plan.rst | 12 +-
test_plans/l3fwdacl_test_plan.rst | 39 ++--
test_plans/large_vf_test_plan.rst | 10 +-
test_plans/link_flowctrl_test_plan.rst | 2 +-
.../link_status_interrupt_test_plan.rst | 9 +-
test_plans/linux_modules_test_plan.rst | 10 +-
...ack_multi_paths_port_restart_test_plan.rst | 40 ++--
.../loopback_multi_queues_test_plan.rst | 80 ++++----
test_plans/mac_filter_test_plan.rst | 2 +-
test_plans/macsec_for_ixgbe_test_plan.rst | 10 +-
...ious_driver_event_indication_test_plan.rst | 8 +-
test_plans/mdd_test_plan.rst | 8 +-
.../metering_and_policing_test_plan.rst | 28 +--
test_plans/mtu_update_test_plan.rst | 2 +-
test_plans/multiple_pthread_test_plan.rst | 68 +++----
test_plans/ptpclient_test_plan.rst | 10 +-
test_plans/ptype_mapping_test_plan.rst | 2 +-
test_plans/qinq_filter_test_plan.rst | 16 +-
test_plans/qos_api_test_plan.rst | 18 +-
test_plans/qos_meter_test_plan.rst | 2 +-
test_plans/qos_sched_test_plan.rst | 24 +--
test_plans/queue_region_test_plan.rst | 2 +-
test_plans/queue_start_stop_test_plan.rst | 2 +-
test_plans/rss_key_update_test_plan.rst | 2 +-
test_plans/rss_to_rte_flow_test_plan.rst | 30 +--
test_plans/rte_flow_test_plan.rst | 190 +++++++++---------
test_plans/rteflow_priority_test_plan.rst | 16 +-
...ntime_vf_queue_number_kernel_test_plan.rst | 10 +-
...time_vf_queue_number_maxinum_test_plan.rst | 8 +-
.../runtime_vf_queue_number_test_plan.rst | 26 +--
test_plans/rxtx_callbacks_test_plan.rst | 11 +-
test_plans/rxtx_offload_test_plan.rst | 16 +-
test_plans/scatter_test_plan.rst | 2 +-
test_plans/speed_capabilities_test_plan.rst | 2 +-
.../vdev_primary_secondary_test_plan.rst | 4 +-
test_plans/veb_switch_test_plan.rst | 30 +--
test_plans/vf_daemon_test_plan.rst | 2 +-
test_plans/vf_jumboframe_test_plan.rst | 2 +-
test_plans/vf_kernel_test_plan.rst | 2 +-
test_plans/vf_l3fwd_test_plan.rst | 13 +-
test_plans/vf_single_core_perf_test_plan.rst | 2 +-
...tio_user_as_exceptional_path_test_plan.rst | 6 +-
...ser_for_container_networking_test_plan.rst | 8 +-
test_plans/vmdq_dcb_test_plan.rst | 14 +-
77 files changed, 638 insertions(+), 547 deletions(-)
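For reference, the meson/ninja workflow that the renamed paths in this series assume can be sketched as below. This is a minimal sketch, not part of the patch; the build directory name is an assumption (any directory name works):

```shell
# Minimal sketch of the meson build the updated test plans assume.
# BUILD is an assumed name for the build directory.
BUILD=x86_64-native-linuxapp-gcc
CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static $BUILD
ninja -C $BUILD
# Meson-built apps carry a "dpdk-" prefix under the build tree, e.g.:
#   $BUILD/app/dpdk-testpmd      (was build/app/testpmd)
#   $BUILD/app/dpdk-proc-info    (was build/app/dpdk-procinfo)
# Examples are enabled per-target and land under examples/, e.g.:
#   meson configure -Dexamples=cmdline $BUILD && ninja -C $BUILD
#   $BUILD/examples/dpdk-cmdline
```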
--
2.25.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [dts][PATCH V1 1/4] test_plans/*: modify test plan to adapt meson build
2022-01-22 18:20 [dts][PATCH V1 0/4] test_plans/*: modify test plan to adapt meson build Yu Jiang
@ 2022-01-22 18:20 ` Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 2/4] " Yu Jiang
` (2 subsequent siblings)
3 siblings, 0 replies; 7+ messages in thread
From: Yu Jiang @ 2022-01-22 18:20 UTC (permalink / raw)
To: lijuan.tu, dts; +Cc: Yu Jiang
test_plans/*: modify test plan to adapt meson build
Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
test_plans/blocklist_test_plan.rst | 6 +--
test_plans/checksum_offload_test_plan.rst | 2 +-
.../cloud_filter_with_l4_port_test_plan.rst | 2 +-
test_plans/cmdline_test_plan.rst | 9 +++-
test_plans/dcf_lifecycle_test_plan.rst | 52 +++++++++----------
test_plans/ddp_gtp_qregion_test_plan.rst | 2 +-
test_plans/ddp_gtp_test_plan.rst | 2 +-
test_plans/ddp_l2tpv3_test_plan.rst | 2 +-
test_plans/ddp_mpls_test_plan.rst | 2 +-
test_plans/ddp_ppp_l2tp_test_plan.rst | 2 +-
test_plans/dual_vlan_test_plan.rst | 2 +-
test_plans/dynamic_flowtype_test_plan.rst | 2 +-
test_plans/dynamic_queue_test_plan.rst | 2 +-
test_plans/eeprom_dump_test_plan.rst | 2 +-
test_plans/ethtool_stats_test_plan.rst | 34 ++++++------
test_plans/eventdev_pipeline_test_plan.rst | 24 +++++----
test_plans/external_memory_test_plan.rst | 8 +--
.../external_mempool_handler_test_plan.rst | 23 ++++----
test_plans/interrupt_pmd_test_plan.rst | 15 ++++--
test_plans/ip_pipeline_test_plan.rst | 33 +++++++-----
test_plans/ipgre_test_plan.rst | 6 +--
test_plans/ipv4_reassembly_test_plan.rst | 24 +++++----
..._get_extra_queue_information_test_plan.rst | 4 +-
test_plans/jumboframes_test_plan.rst | 4 +-
test_plans/kernelpf_iavf_test_plan.rst | 12 ++---
test_plans/kni_test_plan.rst | 14 ++---
test_plans/l2fwd_jobstats_test_plan.rst | 11 +++-
27 files changed, 171 insertions(+), 130 deletions(-)
diff --git a/test_plans/blocklist_test_plan.rst b/test_plans/blocklist_test_plan.rst
index a284448d..f1231331 100644
--- a/test_plans/blocklist_test_plan.rst
+++ b/test_plans/blocklist_test_plan.rst
@@ -53,7 +53,7 @@ Test Case: Testpmd with no blocklisted device
Run testpmd in interactive mode and ensure that at least 2 ports
are bound and available::
- build/testpmd -c 3 -- -i
+ build/app/dpdk-testpmd -c 3 -- -i
....
EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.0/driver/unbind
EAL: Core 1 is ready (tid=357fc700)
@@ -91,7 +91,7 @@ Test Case: Testpmd with one port blocklisted
Select first available port to be blocklisted and specify it with -b option. For the example above::
- build/testpmd -c 3 -b 0000:01:00.0 -- -i
+ build/app/dpdk-testpmd -c 3 -b 0000:01:00.0 -- -i
Check that the corresponding device is skipped for binding, and
only 3 ports are available now:::
@@ -126,7 +126,7 @@ Test Case: Testpmd with all but one port blocklisted
Blocklist all devices except the last one.
For the example above:::
- build/testpmd -c 3 -b 0000:01:00.0 -b 0000:01:00.0 -b 0000:02:00.0 -- -i
+ build/app/dpdk-testpmd -c 3 -b 0000:01:00.0 -b 0000:01:00.0 -b 0000:02:00.0 -- -i
Check that the 3 corresponding devices are skipped for binding, and
only 1 port is available now:::
diff --git a/test_plans/checksum_offload_test_plan.rst b/test_plans/checksum_offload_test_plan.rst
index f4b388c4..7b29b1ec 100644
--- a/test_plans/checksum_offload_test_plan.rst
+++ b/test_plans/checksum_offload_test_plan.rst
@@ -92,7 +92,7 @@ to the device under test::
Assuming that ports ``0`` and ``2`` are connected to a traffic generator,
launch the ``testpmd`` with the following arguments::
- ./build/app/testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
+ ./build/app/dpdk-testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5
enable-rx-cksum
diff --git a/test_plans/cloud_filter_with_l4_port_test_plan.rst b/test_plans/cloud_filter_with_l4_port_test_plan.rst
index ed2109eb..e9f226ac 100644
--- a/test_plans/cloud_filter_with_l4_port_test_plan.rst
+++ b/test_plans/cloud_filter_with_l4_port_test_plan.rst
@@ -49,7 +49,7 @@ Prerequisites
./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:81:00.0
4.Launch the testpmd::
- ./testpmd -l 0-3 -n 4 -a 81:00.0 --file-prefix=test -- -i --rxq=16 --txq=16 --disable-rss
+ ./build/app/dpdk-testpmd -l 0-3 -n 4 -a 81:00.0 --file-prefix=test -- -i --rxq=16 --txq=16 --disable-rss
testpmd> set fwd rxonly
testpmd> set promisc all off
testpmd> set verbose 1
diff --git a/test_plans/cmdline_test_plan.rst b/test_plans/cmdline_test_plan.rst
index d1499991..70a17b00 100644
--- a/test_plans/cmdline_test_plan.rst
+++ b/test_plans/cmdline_test_plan.rst
@@ -66,9 +66,16 @@ to the device under test::
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+Build dpdk and examples=cmdline::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=cmdline <build_target>
+ ninja -C <build_target>
+
Launch the ``cmdline`` with 24 logical cores in linuxapp environment::
- $ ./build/app/cmdline -cffffff
+ $ ./build/examples/dpdk-cmdline -cffffff
Test the 3 simple commands in below prompt ::
diff --git a/test_plans/dcf_lifecycle_test_plan.rst b/test_plans/dcf_lifecycle_test_plan.rst
index 4c010e76..2c8628f2 100644
--- a/test_plans/dcf_lifecycle_test_plan.rst
+++ b/test_plans/dcf_lifecycle_test_plan.rst
@@ -102,7 +102,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
Expected: VF get DCF mode. There are outputs in testpmd launching ::
@@ -128,8 +128,8 @@ Set a VF as trust on each PF ::
Launch dpdk on the VF on each PF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:11.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf1 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-15 -n 4 -a 18:11.0,cap=dcf --file-prefix=dcf2 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf1 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-15 -n 4 -a 18:11.0,cap=dcf --file-prefix=dcf2 -- -i
Expected: VF get DCF mode. There are outputs in each testpmd launching ::
@@ -152,7 +152,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.1,cap=dcf --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.1,cap=dcf --file-prefix=vf -- -i
Expected: VF can NOT get DCF mode. testpmd should provide a friendly output ::
@@ -180,7 +180,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
Expected: VF can NOT get DCF mode. testpmd should provide a friendly output ::
@@ -208,11 +208,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -260,11 +260,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -309,11 +309,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -360,11 +360,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the DCF ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf2 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf2 -- -i
Expect: the second testpmd can't be launched
@@ -385,16 +385,16 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1 and VF2, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf1 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf1 -- -i
set verbose 1
set fwd mac
start
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 15-16 -n 4 -a 18:01.2 --file-prefix=vf2 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 15-16 -n 4 -a 18:01.2 --file-prefix=vf2 -- -i
set verbose 1
set fwd mac
start
@@ -453,11 +453,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1 18:01.2
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 -a 18:01.2 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 -a 18:01.2 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -549,7 +549,7 @@ Set ADQ on PF ::
Try to launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Expect: testpmd can't be launched. PF should reject DCF mode.
@@ -565,7 +565,7 @@ Remove ADQ on PF ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Expect: testpmd can launch successfully. DCF mode can be granted ::
@@ -589,7 +589,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Set ADQ on PF ::
@@ -629,7 +629,7 @@ Set a VF as trust ::
Launch dpdk on the VF0 on PF1, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Set ADQ on PF2 ::
@@ -973,7 +973,7 @@ TC31: add ACL rule by kernel, reject request for DCF functionality
3. launch testpmd on VF0 requesting for DCF functionality::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
report error::
@@ -1015,7 +1015,7 @@ TC32: add ACL rule by kernel, accept request for DCF functionality of another PF
3. launch testpmd on VF0 of PF0 requesting for DCF functionality successfully::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
show the port info::
@@ -1032,7 +1032,7 @@ TC33: ACL DCF mode is active, add ACL filters by way of host based configuration
2. launch testpmd on VF0 of PF0 requesting for DCF functionality successfully::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
show the port info::
@@ -1061,7 +1061,7 @@ TC34: ACL DCF mode is active, add ACL filters by way of host based configuration
2. launch testpmd on VF0 of PF0 requesting for DCF functionality successfully::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
show the port info::
diff --git a/test_plans/ddp_gtp_qregion_test_plan.rst b/test_plans/ddp_gtp_qregion_test_plan.rst
index 596f4855..7e2b1816 100644
--- a/test_plans/ddp_gtp_qregion_test_plan.rst
+++ b/test_plans/ddp_gtp_qregion_test_plan.rst
@@ -86,7 +86,7 @@ Prerequisites
--pkt-filter-mode=perfect on testpmd to enable flow director. In general,
PF's max queue is 64::
- ./testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect
--port-topology=chained --txq=64 --rxq=64
diff --git a/test_plans/ddp_gtp_test_plan.rst b/test_plans/ddp_gtp_test_plan.rst
index ed5139bc..0fd5a50d 100644
--- a/test_plans/ddp_gtp_test_plan.rst
+++ b/test_plans/ddp_gtp_test_plan.rst
@@ -82,7 +82,7 @@ Prerequisites
port topology mode, add txq/rxq to enable multi-queues. In general, PF's
max queue is 64, VF's max queue is 4::
- ./testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect --port-topology=chained --tx-offloads=0x8fff --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect --port-topology=chained --tx-offloads=0x8fff --txq=64 --rxq=64
Test Case: Load dynamic device personalization
diff --git a/test_plans/ddp_l2tpv3_test_plan.rst b/test_plans/ddp_l2tpv3_test_plan.rst
index 8262da35..d4ae0f55 100644
--- a/test_plans/ddp_l2tpv3_test_plan.rst
+++ b/test_plans/ddp_l2tpv3_test_plan.rst
@@ -100,7 +100,7 @@ any DDP functionality*
5. Start the TESTPMD::
- ./x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -c f -n 4 -a
+ ./<build>/app/dpdk-testpmd -c f -n 4 -a
<PCI address of device> -- -i --port-topology=chained --txq=64 --rxq=64
--pkt-filter-mode=perfect
diff --git a/test_plans/ddp_mpls_test_plan.rst b/test_plans/ddp_mpls_test_plan.rst
index d76934c1..6c4d0e01 100644
--- a/test_plans/ddp_mpls_test_plan.rst
+++ b/test_plans/ddp_mpls_test_plan.rst
@@ -70,7 +70,7 @@ Prerequisites
enable multi-queues. In general, PF's max queue is 64, VF's max queue
is 4::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
--txq=4 --rxq=4
diff --git a/test_plans/ddp_ppp_l2tp_test_plan.rst b/test_plans/ddp_ppp_l2tp_test_plan.rst
index 8f51ff20..3f9c53b7 100644
--- a/test_plans/ddp_ppp_l2tp_test_plan.rst
+++ b/test_plans/ddp_ppp_l2tp_test_plan.rst
@@ -109,7 +109,7 @@ Prerequisites
--pkt-filter-mode=perfect on testpmd to enable flow director. In general,
PF's max queue is 64::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
--pkt-filter-mode=perfect
Load/delete dynamic device personalization
diff --git a/test_plans/dual_vlan_test_plan.rst b/test_plans/dual_vlan_test_plan.rst
index 9955ef5d..a7e03bcc 100644
--- a/test_plans/dual_vlan_test_plan.rst
+++ b/test_plans/dual_vlan_test_plan.rst
@@ -56,7 +56,7 @@ to the device under test::
Assuming that ports ``0`` and ``1`` are connected to the traffic generator's port ``A`` and ``B``,
launch the ``testpmd`` with the following arguments::
- ./build/app/testpmd -c ffffff -n 3 -- -i --burst=1 --txpt=32 \
+ ./<build>/app/dpdk-testpmd -c ffffff -n 3 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x3
The -n option selects the number of memory channels. It should match the number of memory channels on that setup.
diff --git a/test_plans/dynamic_flowtype_test_plan.rst b/test_plans/dynamic_flowtype_test_plan.rst
index 1acf60c8..5fda715e 100644
--- a/test_plans/dynamic_flowtype_test_plan.rst
+++ b/test_plans/dynamic_flowtype_test_plan.rst
@@ -87,7 +87,7 @@ Prerequisites
2. Start testpmd on host, set chained port topology mode, add txq/rxq to
enable multi-queues. In general, PF's max queue is 64::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
3. Set rxonly forwarding and enable output
diff --git a/test_plans/dynamic_queue_test_plan.rst b/test_plans/dynamic_queue_test_plan.rst
index 6be6ec74..dc1d350a 100644
--- a/test_plans/dynamic_queue_test_plan.rst
+++ b/test_plans/dynamic_queue_test_plan.rst
@@ -79,7 +79,7 @@ Prerequisites
2. Start testpmd on host, set chained port topology mode, add txq/rxq to
enable multi-queues::
- ./testpmd -c 0xf -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c 0xf -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
Test Case: Rx queue setup at runtime
diff --git a/test_plans/eeprom_dump_test_plan.rst b/test_plans/eeprom_dump_test_plan.rst
index 3b169c39..3923f3fc 100644
--- a/test_plans/eeprom_dump_test_plan.rst
+++ b/test_plans/eeprom_dump_test_plan.rst
@@ -54,7 +54,7 @@ to the device under test::
Assuming that ports are up and working, then launch the ``testpmd`` application
with the following arguments::
- ./build/app/testpmd -- -i --portmask=0x3
+ ./<build>/app/dpdk-testpmd -- -i --portmask=0x3
Test Case : EEPROM Dump
=======================
diff --git a/test_plans/ethtool_stats_test_plan.rst b/test_plans/ethtool_stats_test_plan.rst
index 95f9e7a6..7947b68d 100644
--- a/test_plans/ethtool_stats_test_plan.rst
+++ b/test_plans/ethtool_stats_test_plan.rst
@@ -74,7 +74,7 @@ bind two ports::
Test Case: xstat options
------------------------
-check ``dpdk-procinfo`` tool support ``xstats`` command options.
+check that the ``dpdk-proc-info`` tool supports the ``xstats`` command options.
These options should be included::
@@ -87,17 +87,17 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
testpmd> start
-#. run ``dpdk-procinfo`` tool::
+#. run ``dpdk-proc-info`` tool::
- ./<target name>/app/dpdk-procinfo
+ ./<target name>/app/dpdk-proc-info
-#. check ``dpdk-procinfo`` tool output should contain upper options.
+#. check that the ``dpdk-proc-info`` tool output contains the options listed above.
Test Case: xstat statistic integrity
------------------------------------
@@ -108,7 +108,7 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
@@ -118,11 +118,11 @@ steps:
sendp([Ether()/IP()/UDP()/Raw('\0'*60)], iface=<port 0 name>)
-#. run ``dpdk-procinfo`` tool with ``xstats`` option and check if all ports
+#. run ``dpdk-proc-info`` tool with ``xstats`` option and check if all ports
extended statistics can be accessed by xstat name or xstat id::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-id <N>
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-name <statistic name>
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-id <N>
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-name <statistic name>
Test Case: xstat-reset command
------------------------------
@@ -133,7 +133,7 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
@@ -143,10 +143,10 @@ steps:
sendp([Ether()/IP()/UDP()/Raw('\0'*60)], iface=<port 0 name>)
-#. run ``dpdk-procinfo`` tool with ``xstats-reset`` option and check if all port
+#. run ``dpdk-proc-info`` tool with ``xstats-reset`` option and check if all port
statistics have been cleared::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-reset
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-reset
Test Case: xstat single statistic
---------------------------------
@@ -158,7 +158,7 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
@@ -172,14 +172,14 @@ steps:
testpmd> show port xstats all
-#. run ``dpdk-procinfo`` tool with ``xstats-id`` option to get the statistic
+#. run ``dpdk-proc-info`` tool with ``xstats-id`` option to get the statistic
name corresponding with the index id::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-id 0,1,...N
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-id 0,1,...N
-#. run ``dpdk-procinfo`` tool with ``xstats-name`` option to get the statistic
+#. run ``dpdk-proc-info`` tool with ``xstats-name`` option to get the statistic
data corresponding with the statistic name::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-name <statistic name>
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-name <statistic name>
#. compare these proc info tool xstat values with testpmd xstat values.
\ No newline at end of file
diff --git a/test_plans/eventdev_pipeline_test_plan.rst b/test_plans/eventdev_pipeline_test_plan.rst
index 866eae72..4e4498d4 100644
--- a/test_plans/eventdev_pipeline_test_plan.rst
+++ b/test_plans/eventdev_pipeline_test_plan.rst
@@ -36,6 +36,12 @@ Eventdev Pipeline SW PMD Tests
Prerequisites
==============
+Build dpdk and examples=eventdev_pipeline::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=eventdev_pipeline <build_target>
+ ninja -C <build_target>
Test Case: Keep the packets order with default stage in single-flow and multi-flow
====================================================================================
@@ -43,7 +49,7 @@ Description: the packets' order which will pass through a same flow should be gu
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
Parameters:
-r2, -t4, -e8: allocate cores to rx, tx and scheduler
@@ -62,7 +68,7 @@ Description: the sample only guarantee that keep the packets order with only one
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
2. Send traffic from ixia device with same 5 tuple(single-link) and with different 5-tuple(multi-flow)
@@ -75,7 +81,7 @@ in single-flow, the load-balanced behavior is not guaranteed;
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
2. Use traffic generator to send huge number of packets:
In single-flow situation, traffic generator will send packets with the same 5-tuple
@@ -90,7 +96,7 @@ Description: A good load-balanced behavior should be guaranteed in both single-f
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
2. Use traffic generator to send huge number of packets:
In single-flow situation, traffic generator will send packets with the same 5-tuple
@@ -105,7 +111,7 @@ Description: A good load-balanced behavior should be guaranteed in both single-f
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -p -D
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -p -D
2. Use traffic generator to send huge number of packets:
In single-flow situation, traffic generator will send packets with the same 5-tuple
@@ -121,7 +127,7 @@ We use 4 worker and 2 stage as the test background.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32
2. use traffic generator to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -134,7 +140,7 @@ We use 4 worker and 2 stage as the test background.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -p
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -p
2. use traffic generator to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -147,7 +153,7 @@ We use 4 worker and 2 stage as the test background.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -o
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -o
2. use traffic generator to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -159,6 +165,6 @@ Description: Execute basic forward test with all type of stage.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
+ # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
2. use traffic generator to send some packets and verify the sample could forward them normally
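For readers cross-checking the masks in the commands above: ``-r``, ``-t``, ``-e`` and ``-w`` are hex lcore masks for the RX, TX, scheduler and worker roles, so ``-w F0`` places workers on lcores 4-7. A throwaway helper (not part of DPDK or this patch) to count the cores a mask selects:

```shell
# Count set bits in a hex lcore mask (illustrative helper only).
popcount_hex() {
    local mask=$((16#$1)) n=0
    while (( mask )); do
        (( n += mask & 1 ))
        (( mask >>= 1 ))
    done
    echo "$n"
}

popcount_hex F0   # 4 -> four worker lcores for "-w F0"
```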
diff --git a/test_plans/external_memory_test_plan.rst b/test_plans/external_memory_test_plan.rst
index 42f57726..7109e337 100644
--- a/test_plans/external_memory_test_plan.rst
+++ b/test_plans/external_memory_test_plan.rst
@@ -46,7 +46,7 @@ Bind the ports to IGB_UIO driver
Start testpmd with --mp-alloc=xmem flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
Start forward in testpmd
@@ -60,7 +60,7 @@ Bind the ports to IGB_UIO driver
Start testpmd with --mp-alloc=xmemhuge flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
Start forward in testpmd
@@ -73,7 +73,7 @@ Bind the ports to vfio-pci driver
Start testpmd with --mp-alloc=xmem flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
Start forward in testpmd
@@ -86,7 +86,7 @@ Bind the ports to vfio-pci driver
Start testpmd with --mp-alloc=xmemhuge flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
Start forward in testpmd
diff --git a/test_plans/external_mempool_handler_test_plan.rst b/test_plans/external_mempool_handler_test_plan.rst
index 09ed4ca9..2f821364 100644
--- a/test_plans/external_mempool_handler_test_plan.rst
+++ b/test_plans/external_mempool_handler_test_plan.rst
@@ -42,13 +42,14 @@ systems and software based memory allocators to be used with DPDK.
Test Case 1: Multiple producers and multiple consumers
======================================================
-1. Change default mempool handler operations to "ring_mp_mc"::
+1. The default mempool handler operation is "ring_mp_mc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_mp_mc\"/' ./config/common_base
+ grep MEMPOOL_OPS /root/dpdk/config/rte_config.h
+ #define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -65,11 +66,11 @@ Test Case 2: Single producer and Single consumer
1. Change default mempool operation to "ring_sp_sc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_sp_sc\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"ring_sp_sc\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -86,11 +87,11 @@ Test Case 3: Single producer and Multiple consumers
1. Change default mempool operation to "ring_sp_mc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_sp_mc\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"ring_sp_mc\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -107,11 +108,11 @@ Test Case 4: Multiple producers and single consumer
1. Change default mempool operation to "ring_mp_sc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_mp_sc\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"ring_mp_sc\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -128,11 +129,11 @@ Test Case 4: Stack mempool handler
1. Change default mempool operation to "stack"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"stack\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"stack\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
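The sed one-liners above append an overriding ``#define`` to ``config/rte_config.h`` rather than editing the removed make-based ``config/common_base``. A minimal sketch of the same ``$a\`` append, run on a scratch stand-in file so nothing in a real DPDK tree is touched:

```shell
# Demonstrate the '$a\' append technique on a temporary stand-in
# for config/rte_config.h (GNU sed assumed).
conf=$(mktemp)
printf '#define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"\n' > "$conf"

# Append the desired default at the end of the file, as the plan does.
sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS "stack"' "$conf"

last_def=$(tail -n 1 "$conf")
echo "$last_def"
rm -f "$conf"
```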
diff --git a/test_plans/interrupt_pmd_test_plan.rst b/test_plans/interrupt_pmd_test_plan.rst
index cb8b2f19..c89d68e2 100644
--- a/test_plans/interrupt_pmd_test_plan.rst
+++ b/test_plans/interrupt_pmd_test_plan.rst
@@ -60,12 +60,19 @@ Iommu pass through feature has been enabled in kernel::
Both igb_uio and vfio drivers are supported. For vfio, the kernel must be 3.6+
and VT-d must be enabled in the BIOS; insmod both the vfio and vfio-pci modules.
+Build dpdk and examples=l3fwd-power::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=l3fwd-power <build_target>
+ ninja -C <build_target>
+
Test Case1: PF interrupt pmd with different queue
=================================================
Run l3fwd-power with one queue per port::
- l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send one packet to Port0 and Port1, check that the threads on core1 and core2
are woken up::
@@ -85,7 +92,7 @@ keep up awake.
Run l3fwd-power with a random number of queues per port, e.g. 4::
- l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="0,0,0),(0,1,1),\
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
(0,2,2),(0,3,3),(0,4,4)"
Send packets with increasing dest IP to Port0, check that all threads are woken up
@@ -95,7 +102,7 @@ keep up awake.
Run l3fwd-power with 15 queues per port::
- l3fwd-power -c 0xffffff -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0xffffff -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
(0,2,2),(0,3,3),(0,4,4),(0,5,5),(0,6,6),(0,7,7),(1,0,8),\
(1,1,9),(1,2,10),(1,3,11),(1,4,12),(1,5,13),(1,6,14)"
@@ -109,7 +116,7 @@ Test Case2: PF lsc interrupt with vfio
Run l3fwd-power with one queue per port::
- l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Plug out Port0 cable, check that the link down interrupt is captured and handled
by the pmd driver.
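The path substitutions throughout this series follow one convention: make builds put binaries at ``./build/<app>``, while meson builds put applications at ``<build_target>/app/dpdk-<name>`` and examples at ``<build_target>/examples/dpdk-<name>``. A hypothetical helper (``meson_bin`` is an invented name, not a DPDK tool) that encodes that layout:

```shell
# Map an app or example name to its meson-build binary path.
# Usage: meson_bin <build_dir> <app|example> <name>
meson_bin() {
    case "$2" in
        app)     printf '%s/app/dpdk-%s\n' "$1" "$3" ;;
        example) printf '%s/examples/dpdk-%s\n' "$1" "$3" ;;
        *)       return 1 ;;
    esac
}

meson_bin x86_64-native-linuxapp-gcc app testpmd
# x86_64-native-linuxapp-gcc/app/dpdk-testpmd
```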
diff --git a/test_plans/ip_pipeline_test_plan.rst b/test_plans/ip_pipeline_test_plan.rst
index 1c774e3c..5452bc90 100644
--- a/test_plans/ip_pipeline_test_plan.rst
+++ b/test_plans/ip_pipeline_test_plan.rst
@@ -76,6 +76,13 @@ Change pci device id of LINK0 to pci device id of dut_port_0.
There are two drivers supported now: aesni_gcm and aesni_mb.
Different drivers support different Algorithms.
+Build dpdk and examples=ip_pipeline::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=ip_pipeline <build_target>
+ ninja -C <build_target>
+
Test Case: l2fwd pipeline
===========================
1. Edit examples/ip_pipeline/examples/l2fwd.cli,
@@ -84,7 +91,7 @@ Test Case: l2fwd pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -- -s examples/l2fwd.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/l2fwd.cli
3. Send packets at tester side with scapy, verify:
@@ -99,7 +106,7 @@ Test Case: flow classification pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/flow.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/flow.cli
3. Send following packets with one test port::
@@ -121,7 +128,7 @@ Test Case: routing pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/route.cli,
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/route.cli
3. Send following packets with one test port::
@@ -143,7 +150,7 @@ Test Case: firewall pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/firewall.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 –- -s examples/firewall.cli
3. Send following packets with one test port::
@@ -164,7 +171,7 @@ Test Case: pipeline with tap
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/tap.cli,
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/tap.cli
3. Send packets at tester side with scapy, verify
packets sent from tester_port_0 can be received at tester_port_1, and vice versa.
@@ -178,7 +185,7 @@ Test Case: traffic management pipeline
3. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -a 0000:81:00.0 -- -s examples/traffic_manager.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -a 0000:81:00.0 -- -s examples/traffic_manager.cli
4. Config traffic with dst ipaddr increase from 0.0.0.0 to 15.255.0.0, total 4096 streams,
also config flow tracked-by dst ipaddr, verify each flow's throughput is about linerate/4096.
@@ -191,7 +198,7 @@ Test Case: RSS pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x1f -n 4 –- -s examples/rss.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x1f -n 4 -- -s examples/rss.cli
3. Send following packets with one test port::
@@ -220,7 +227,7 @@ Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
2. Start testpmd with the four pf ports::
- ./testpmd -c 0xf0 -n 4 -a 05:00.0 -a 05:00.1 -a 05:00.2 -a 05:00.3 --file-prefix=pf --socket-mem 1024,1024 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 05:00.0 -a 05:00.1 -a 05:00.2 -a 05:00.3 --file-prefix=pf --socket-mem 1024,1024 -- -i
Set vf mac address from pf port::
@@ -235,7 +242,7 @@ Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
4. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -a 0000:05:02.0 -a 0000:05:06.0 \
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -a 0000:05:02.0 -a 0000:05:06.0 \
-a 0000:05:0a.0 -a 0000:05:0e.0 --file-prefix=vf --socket-mem 1024,1024 -- -s examples/vf.cli
The exact format of port allowlist: domain:bus:devid:func
@@ -290,7 +297,7 @@ Test Case: vf l2fwd pipeline(pf bound to kernel driver)
4. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -- -s examples/vf.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/vf.cli
5. Send packets at tester side with scapy::
@@ -331,7 +338,7 @@ Test Case: crypto pipeline - AEAD algorithm in aesni_gcm
4. Run ip_pipeline app as the following::
- ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_gcm0
+ ./<build_target>/examples/dpdk-ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_gcm0
--socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
5. Send packets with IXIA port,
@@ -365,7 +372,7 @@ Test Case: crypto pipeline - cipher algorithm in aesni_mb
4. Run ip_pipeline app as the following::
- ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
5. Send packets with IXIA port,
Use a tool to calculate the ciphertext from plaintext and key as an expected value.
@@ -395,7 +402,7 @@ Test Case: crypto pipeline - cipher_auth algorithm in aesni_mb
4. Run ip_pipeline app as the following::
- ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
5. Send packets with IXIA port,
Use a tool to calculate the ciphertext from plaintext and cipher key with AES-CBC algorithm.
diff --git a/test_plans/ipgre_test_plan.rst b/test_plans/ipgre_test_plan.rst
index 3a466b75..2c652273 100644
--- a/test_plans/ipgre_test_plan.rst
+++ b/test_plans/ipgre_test_plan.rst
@@ -48,7 +48,7 @@ Test Case 1: GRE ipv4 packet detect
Start testpmd and enable rxonly forwarding mode::
- testpmd -c ffff -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c ffff -n 4 -- -i
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -77,7 +77,7 @@ Test Case 2: GRE ipv6 packet detect
Start testpmd and enable rxonly forwarding mode::
- testpmd -c ffff -n 4 -- -i --enable-hw-vlan
+ ./<build_target>/app/dpdk-testpmd -c ffff -n 4 -- -i --enable-hw-vlan
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -124,7 +124,7 @@ Test Case 4: GRE packet chksum offload
Start testpmd with hardware checksum offload enabled::
- testpmd -c ff -n 3 -- -i --enable-rx-cksum --port-topology=loop
+ ./<build_target>/app/dpdk-testpmd -c ff -n 3 -- -i --enable-rx-cksum --port-topology=loop
testpmd> set verbose 1
testpmd> set fwd csum
testpmd> csum set ip hw 0
diff --git a/test_plans/ipv4_reassembly_test_plan.rst b/test_plans/ipv4_reassembly_test_plan.rst
index 75aba16e..354dae51 100644
--- a/test_plans/ipv4_reassembly_test_plan.rst
+++ b/test_plans/ipv4_reassembly_test_plan.rst
@@ -56,13 +56,19 @@ to the device under test::
1x Intel® 82599 (Niantic) NICs (1x 10GbE full duplex optical ports per NIC)
plugged into the available PCIe Gen2 8-lane slots.
+Build dpdk and examples=ip_reassembly::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=ip_reassembly <build_target>
+ ninja -C <build_target>
Test Case: Send 1K packets, 4 fragments each and 1K maxflows
============================================================
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 1K packets split in 4 fragments each with a ``maxflows`` of 1K.
@@ -79,7 +85,7 @@ Test Case: Send 2K packets, 4 fragments each and 1K maxflows
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 2K packets split in 4 fragments each with a ``maxflows`` of 1K.
@@ -96,7 +102,7 @@ Test Case: Send 4K packets, 7 fragments each and 4K maxflows
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=4096 --flowttl=10s
Modifies the sample app source code to enable up to 7 fragments per packet,
@@ -116,7 +122,7 @@ Test Case: Send +1K packets and ttl 3s; wait +ttl; send 1K packets
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=3s
Sends 1100 packets split in 4 fragments each.
@@ -142,7 +148,7 @@ Test Case: Send more packets than maxflows; only maxflows packets are forwarded
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1023 --flowttl=5s
Sends 1K packets with ``maxflows`` equal to 1023.
@@ -175,7 +181,7 @@ Test Case: Send more fragments than supported
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 1 packet split in 5 fragments while the maximum number of supported
@@ -194,7 +200,7 @@ Test Case: Send 3 frames and delay the 4th; no frames are forwarded back
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=3s
Creates 1 packet split in 4 fragments. Sends the first 3 fragments and waits
@@ -213,7 +219,7 @@ Test Case: Send jumbo frames
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s --enable-jumbo --max-pkt-len=9500
Sets the NIC MTU to 9000 and sends 1K packets of 8900B split in 4 fragments of
@@ -232,7 +238,7 @@ Test Case: Send jumbo frames without enable them in the app
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends jumbo packets in the same way the previous test case does but without
diff --git a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
index 146f04bb..07c67e76 100644
--- a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
+++ b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
@@ -89,7 +89,7 @@ Test case 1: DPDK PF, kernel VF, enable DCB mode with TC=4
1. start the testpmd on PF::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=16
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 1ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=16
testpmd> port stop 0
testpmd> port config 0 dcb vt on 4 pfc off
testpmd> port start 0
@@ -135,7 +135,7 @@ Test case 2: DPDK PF, kernel VF, disable DCB mode
1. start the testpmd on PF::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=2 --txq=2 --nb-cores=16
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 1ffff -n 4 -- -i --rxq=2 --txq=2 --nb-cores=16
2. Check that the VF port link is up. If the VF port is down, bring it up::
diff --git a/test_plans/jumboframes_test_plan.rst b/test_plans/jumboframes_test_plan.rst
index a713ee5d..65287cd1 100644
--- a/test_plans/jumboframes_test_plan.rst
+++ b/test_plans/jumboframes_test_plan.rst
@@ -59,7 +59,7 @@ Assuming that ports ``0`` and ``1`` of the test target are directly connected
to the traffic generator, launch the ``testpmd`` application with the following
arguments::
- ./build/app/testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \
+ ./<build_target>/app/dpdk-testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \
--tx-offloads=0x00008000
The -n option selects the number of memory channels; it should match the number of memory channels on that setup.
@@ -179,7 +179,7 @@ Test Case: Normal frames with jumbo frame support
Start testpmd with jumbo frame support enabled ::
- ./testpmd -c ffffff -n 3 -- -i --rxd=1024 --txd=1024 \
+ ./<build_target>/app/dpdk-testpmd -c ffffff -n 3 -- -i --rxd=1024 --txd=1024 \
--burst=144 --txpt=32 --txht=8 --txwt=8 --txfreet=0 --rxfreet=64 \
--mbcache=200 --portmask=0x3 --mbuf-size=2048 --max-pkt-len=9600
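A quick sanity check on the sizing above, assuming (as is conventional for Ethernet frame lengths) that ``--max-pkt-len`` counts the 14-byte Ethernet header plus the 4-byte CRC on top of the L3 payload:

```shell
# Relationship between testpmd's --max-pkt-len and the usable MTU,
# under the 18-byte Ethernet overhead assumption stated above.
max_pkt_len=9600
overhead=$(( 14 + 4 ))        # Ethernet header + CRC
mtu=$(( max_pkt_len - overhead ))
echo "$mtu"                   # 9582
```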
diff --git a/test_plans/kernelpf_iavf_test_plan.rst b/test_plans/kernelpf_iavf_test_plan.rst
index 72223c77..45c217e4 100644
--- a/test_plans/kernelpf_iavf_test_plan.rst
+++ b/test_plans/kernelpf_iavf_test_plan.rst
@@ -72,7 +72,7 @@ Bind VF device to igb_uio or vfio-pci
Start up VF port::
- ./testpmd -c f -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i
Test case: VF basic RX/TX
=========================
@@ -345,7 +345,7 @@ Ensure tester's port supports sending jumboframe::
Launch testpmd for VF port without enabling jumboframe option::
- ./testpmd -c f -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i
testpmd> set fwd mac
testpmd> start
@@ -363,7 +363,7 @@ Ensure tester's port supports sending jumboframe::
Launch testpmd for VF port with jumboframe option::
- ./testpmd -c f -n 4 -- -i --max-pkt-len=3000
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --max-pkt-len=3000
testpmd> set fwd mac
testpmd> start
@@ -380,7 +380,7 @@ Test case: VF RSS
Start command with multi-queues like below::
- ./testpmd -c f -n 4 -- -i --txq=4 --rxq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
Show RSS RETA configuration::
@@ -424,7 +424,7 @@ Test case: VF RSS hash key
Start command with multi-queues like below::
- ./testpmd -c f -n 4 -- -i --txq=4 --rxq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
Show port rss hash key::
@@ -518,7 +518,7 @@ Change mtu for large packet::
Launch the ``testpmd`` with the following arguments, add "--max-pkt-len"
for large packet::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --max-pkt-len=9000
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --max-pkt-len=9000
Set csum forward::
diff --git a/test_plans/kni_test_plan.rst b/test_plans/kni_test_plan.rst
index 1d4736bb..1802f6ab 100644
--- a/test_plans/kni_test_plan.rst
+++ b/test_plans/kni_test_plan.rst
@@ -117,7 +117,7 @@ system to another)::
rmmod igb_uio
insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko
- ./examples/kni/build/app/kni -c 0xa0001e -n 4 -- -P -p 0x3 --config="(0,1,2,21),(1,3,4,23)" &
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0x3 --config="(0,1,2,21),(1,3,4,23)" &
Case config::
@@ -133,7 +133,7 @@ to write to NIC, threads 21 and 23 are used by the kernel.
As the kernel module is installed using ``"kthread_mode=single"`` the core
affinity is set using ``taskset``::
- ./build/app/kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
Verify whether the interface has been added::
@@ -379,7 +379,7 @@ Assume that ``port 2 and 3`` are used by this application::
rmmod kni
insmod ./kmod/rte_kni.ko "lo_mode=lo_mode_ring_skb"
- ./build/app/kni -c 0xff -n 3 -- -p 0xf -i 0xf -o 0xf0
+ ./<build_target>/examples/dpdk-kni -c 0xff -n 3 -- -p 0xf -i 0xf -o 0xf0
Assume ``port A and B`` on tester connects to NIC ``port 2 and 3``.
@@ -407,7 +407,7 @@ successfully::
rmmod rte_kni
insmod ./kmod/rte_kni.ko <Changing Parameters>
- ./build/app/kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
Use ``dmesg`` to check whether the kernel module is loaded with the specified
@@ -437,7 +437,7 @@ Compare performance results for loopback mode using:
insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <lo_mode and kthread_mode parameters>
- ./examples/kni/build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
At this point, the throughput is measured and recorded for the different
@@ -474,7 +474,7 @@ Compare performance results for bridge mode using:
The application is launched and the bridge is setup using the commands below::
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
- ./build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
ifconfig vEth2_0 up
ifconfig vEth3_0 up
@@ -560,7 +560,7 @@ The application is launched and the bridge is setup using the commands below::
echo 1 > /proc/sys/net/ipv4/ip_forward
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
- ./build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
ifconfig vEth2_0 192.170.2.1
ifconfig vEth3_0 192.170.3.1
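Per the surrounding text, each kni ``--config`` tuple is (port, RX lcore, TX lcore, kernel-thread lcore). A purely illustrative formatter (``kni_cfg`` is an invented helper, not part of the sample app) that rebuilds the argument used above:

```shell
# Build one kni --config entry from its four fields:
# (port, rx lcore, tx lcore, kernel-thread lcore).
kni_cfg() { printf '(%s,%s,%s,%s)' "$1" "$2" "$3" "$4"; }

cfg="$(kni_cfg 0 1 2 21),$(kni_cfg 1 3 4 23)"
echo "$cfg"   # (0,1,2,21),(1,3,4,23)
```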
diff --git a/test_plans/l2fwd_jobstats_test_plan.rst b/test_plans/l2fwd_jobstats_test_plan.rst
index ba5a53f2..585f853a 100644
--- a/test_plans/l2fwd_jobstats_test_plan.rst
+++ b/test_plans/l2fwd_jobstats_test_plan.rst
@@ -64,7 +64,7 @@ note: If using vfio the kernel must be >= 3.6+ and VT-d must be enabled in bios.
The application requires a number of command line options::
- ./build/l2fwd-jobstats [EAL options] -- -p PORTMASK [-q NQ] [-l]
+ ./<build_target>/examples/dpdk-l2fwd-jobstats [EAL options] -- -p PORTMASK [-q NQ] [-l]
The ``l2fwd-jobstats`` application is run with EAL parameters and parameters for
the application itself. For details about the EAL parameters, see the relevant
@@ -75,6 +75,13 @@ itself.
- q NQ: A number of queues (=ports) per lcore (default is 1)
- l: Use locale thousands separator when formatting big numbers.
+Build dpdk and examples=l2fwd-jobstats::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=l2fwd-jobstats <build_target>
+ ninja -C <build_target>
+
Test Case: L2fwd jobstats check
================================================
@@ -82,7 +89,7 @@ Assume port 0 and 1 are connected to the traffic generator, to run the test
application in linuxapp environment with 2 lcores, 2 ports and 2 RX queues
per lcore::
- ./examples/l2fwd-jobstats/build/l2fwd-jobstats -c 0x03 -n 4 -- -q 2 -p 0x03 -l
+ ./<build_target>/examples/dpdk-l2fwd-jobstats -c 0x03 -n 4 -- -q 2 -p 0x03 -l
Then send 100,000 packets to port 0 and 100,000 packets to port 1, and check that
the packet counts reported by the sample match what we set at the traffic generator.
--
2.25.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [dts][PATCH V1 2/4] test_plans/*: modify test plan to adapt meson build
2022-01-22 18:20 [dts][PATCH V1 0/4] test_plans/*: modify test plan to adapt meson build Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 1/4] " Yu Jiang
@ 2022-01-22 18:20 ` Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 3/4] " Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 4/4] " Yu Jiang
3 siblings, 0 replies; 7+ messages in thread
From: Yu Jiang @ 2022-01-22 18:20 UTC (permalink / raw)
To: lijuan.tu, dts; +Cc: Yu Jiang
test_plans/*: modify test plan to adapt meson build
Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
test_plans/l2tp_esp_coverage_test_plan.rst | 12 +--
test_plans/l3fwdacl_test_plan.rst | 39 +++++----
test_plans/large_vf_test_plan.rst | 10 +--
test_plans/link_flowctrl_test_plan.rst | 2 +-
.../link_status_interrupt_test_plan.rst | 9 ++-
...ack_multi_paths_port_restart_test_plan.rst | 40 +++++-----
.../loopback_multi_queues_test_plan.rst | 80 +++++++++----------
test_plans/mac_filter_test_plan.rst | 2 +-
test_plans/macsec_for_ixgbe_test_plan.rst | 10 +--
...ious_driver_event_indication_test_plan.rst | 8 +-
.../metering_and_policing_test_plan.rst | 28 +++----
test_plans/mtu_update_test_plan.rst | 2 +-
test_plans/multiple_pthread_test_plan.rst | 68 ++++++++--------
test_plans/ptpclient_test_plan.rst | 10 ++-
test_plans/ptype_mapping_test_plan.rst | 2 +-
test_plans/qinq_filter_test_plan.rst | 16 ++--
test_plans/qos_api_test_plan.rst | 18 ++---
test_plans/queue_region_test_plan.rst | 2 +-
18 files changed, 188 insertions(+), 170 deletions(-)
diff --git a/test_plans/l2tp_esp_coverage_test_plan.rst b/test_plans/l2tp_esp_coverage_test_plan.rst
index a768684f..f9edaee9 100644
--- a/test_plans/l2tp_esp_coverage_test_plan.rst
+++ b/test_plans/l2tp_esp_coverage_test_plan.rst
@@ -88,7 +88,7 @@ Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload
1. DUT enables rx checksum with "--enable-rx-cksum" when starting testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum
2. DUT setup csum forwarding mode::
@@ -163,7 +163,7 @@ Test Case 2: test MAC_IPV4_ESP HW checksum offload
1. DUT enables rx checksum with "--enable-rx-cksum" when starting testpmd, then sets up csum forwarding mode::
- ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum
2. DUT setup csum forwarding mode::
@@ -1095,7 +1095,7 @@ Test Case 14: MAC_IPV4_L2TPv3 vlan strip on + HW checksum offload check
The pre-steps are the same as in l2tp_esp_iavf_test_plan.
-1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
+1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark::
@@ -1189,7 +1189,7 @@ The pre-steps are as l2tp_esp_iavf_test_plan.
Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
========================================================================
-1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
+1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark::
@@ -1279,7 +1279,7 @@ Test Case 16: MAC_IPV4_ESP vlan strip on + HW checksum offload check
The pre-steps are the same as in l2tp_esp_iavf_test_plan.
-1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
+1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
2. DUT create fdir rules for MAC_IPV4_ESP with queue index and mark::
@@ -1372,7 +1372,7 @@ The pre-steps are as l2tp_esp_iavf_test_plan.
Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
===========================================================================
-1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
+1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
2. DUT create fdir rules for MAC_IPV6_NAT-T-ESP with queue index and mark::
diff --git a/test_plans/l3fwdacl_test_plan.rst b/test_plans/l3fwdacl_test_plan.rst
index 7079308c..4ea60686 100644
--- a/test_plans/l3fwdacl_test_plan.rst
+++ b/test_plans/l3fwdacl_test_plan.rst
@@ -73,6 +73,13 @@ Prerequisites
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./usertools/dpdk-devbind.py --bind=igb_uio 04:00.0 04:00.1
+Build dpdk and examples=l3fwd-acl::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=l3fwd-acl <build_target>
+ ninja -C <build_target>
+
Test Case: packet match ACL rule
================================
Ipv4 packets matching source ip address 200.10.0.1 will be dropped::
Add one default rule in rule file /root/rule_ipv6.db
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv4 packet with source ip address 200.10.0.1 will be dropped.
@@ -100,7 +107,7 @@ Ipv4 packet match destination ip address 100.10.0.1 will be dropped::
Add one default rule in rule file /root/rule_ipv6.db
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv4 packet with destination ip address 100.10.0.1 will be dropped.
@@ -115,7 +122,7 @@ Ipv4 packet match source port 11 will be dropped::
Add one default rule in rule file /root/rule_ipv6.db
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv4 packet with source port 11 will be dropped.
@@ -130,7 +137,7 @@ Ipv4 packet match destination port 101 will be dropped::
Add one default rule in rule file /root/rule_ipv6.db
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv4 packet with destination port 101 will be dropped.
@@ -145,7 +152,7 @@ Ipv4 packet match protocol TCP will be dropped::
Add one default rule in rule file /root/rule_ipv6.db
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one TCP ipv4 packet will be dropped.
@@ -160,7 +167,7 @@ Ipv4 packet match 5-tuple will be dropped::
Add one default rule in rule file /root/rule_ipv6.db
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one TCP ipv4 packet with source ip address 200.10.0.1,
@@ -180,7 +187,7 @@ Ipv6 packet match source ipv6 address 2001:0db8:85a3:08d3:1319:8a2e:0370:7344/12
Add one default rule in rule file /root/rule_ipv4.db
R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv6 packet with source ip address 2001:0db8:85a3:08d3:1319:8a2e:0370:7344/128 will be dropped.
@@ -195,7 +202,7 @@ Ipv6 packet match destination ipv6 address 2002:0db8:85a3:08d3:1319:8a2e:0370:73
Add one default rule in rule file /root/rule_ipv4.db
R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv6 packet with destination ip address 2002:0db8:85a3:08d3:1319:8a2e:0370:7344/128 will be dropped.
@@ -210,7 +217,7 @@ Ipv6 packet match source port 11 will be dropped::
Add one default rule in rule file /root/rule_ipv4.db
R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv6 packet with source port 11 will be dropped.
@@ -225,7 +232,7 @@ Ipv6 packet match destination port 101 will be dropped::
Add one default rule in rule file /root/rule_ipv4.db
R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one ipv6 packet with destination port 101 will be dropped.
@@ -240,7 +247,7 @@ Ipv6 packet match protocol TCP will be dropped::
Add one default rule in rule file /root/rule_ipv4.db
R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one TCP ipv6 packet will be dropped.
@@ -255,7 +262,7 @@ Ipv6 packet match 5-tuple will be dropped::
Add one default rule in rule file /root/rule_ipv4.db
R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one TCP ipv6 packet with source ip address 2001:0db8:85a3:08d3:1319:8a2e:0370:7344/128,
@@ -281,7 +288,7 @@ Add two exact rule as below in rule_ipv6.db::
Start l3fwd-acl and send packet::
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one TCP ipv4 packet with source ip address 200.10.0.1, destination
@@ -312,7 +319,7 @@ Add two LPM rule as below in rule_ipv6.db::
Start l3fwd-acl and send packet::
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
Send one TCP ipv4 packet with destination ip address 1.1.1.1 will be forward to PORT0.
@@ -333,7 +340,7 @@ Packet match 5-tuple will be dropped::
@2001:0db8:85a3:08d3:1319:8a2e:0370:7344/128 2002:0db8:85a3:08d3:1319:8a2e:0370:7344/101 11 : 11 101 : 101 0x06/0xff
R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" --scalar
Send one TCP ipv4 packet with source ip address 200.10.0.1, destination ip address 100.10.0.1,
@@ -363,7 +370,7 @@ Add two ACL rule as below in rule_ipv6.db::
Start l3fwd-acl::
- ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
+ ./<build_target>/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)"
--rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"
The l3fwd-acl application will not start because of the invalid ACL rule.
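As a reviewer's note, the meson build steps this patch adds can be sketched as one shell session. This is a minimal sketch: the build directory name stands in for the ``<build_target>`` placeholder, and the commands are only composed here (not executed), since a real run requires meson and ninja on the DUT:

```shell
# Sketch of the meson build flow referenced by the updated test plans.
# BUILD_DIR is illustrative; any meson build dir name works for <build_target>.
BUILD_DIR="x86_64-native-linuxapp-gcc"

# Step 1: configure a static DPDK build with kernel modules enabled.
CONFIGURE="CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static ${BUILD_DIR}"

# Step 2: enable the l3fwd-acl example, then (re)build with ninja.
ENABLE_EXAMPLE="meson configure -Dexamples=l3fwd-acl ${BUILD_DIR}"
BUILD="ninja -C ${BUILD_DIR}"

# The built example binary then lives under the build dir with a dpdk- prefix,
# which is why the test-plan paths change in this patch.
BINARY="./${BUILD_DIR}/examples/dpdk-l3fwd-acl"
echo "${BINARY}"
```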
diff --git a/test_plans/large_vf_test_plan.rst b/test_plans/large_vf_test_plan.rst
index 71e66bf9..4e2d0555 100644
--- a/test_plans/large_vf_test_plan.rst
+++ b/test_plans/large_vf_test_plan.rst
@@ -57,7 +57,7 @@ Prerequisites
6. Start testpmd with "--txq=256 --rxq=256" to setup 256 queues::
- ./dpdk-testpmd -c ff -n 4 -- -i --rxq=256 --txq=256 --total-num-mbufs=500000
+ ./<build_target>/app/dpdk-testpmd -c ff -n 4 -- -i --rxq=256 --txq=256 --total-num-mbufs=500000
Note::
@@ -325,10 +325,10 @@ Subcase 6: negative: fail to test exceed 256 queues
---------------------------------------------------
Start testpmd on VF0 with 512 queues::
- ./dpdk-testpmd -c f -n 4 -- -i --txq=512 --rxq=512
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=512 --rxq=512
or::
- ./dpdk-testpmd -c f -n 4 -- -i --txq=256 --rxq=256
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=256 --rxq=256
testpmd> port stop all
testpmd> port config all rxq 512
testpmd> port config all txq 512
@@ -408,11 +408,11 @@ Bind all VFs to vfio-pci, only have 32 ports, reached maximum number of ethernet
Start testpmd with queue exceed 4 queues::
- ./dpdk-testpmd -c f -n 4 -- -i --txq=8 --rxq=8
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=8 --rxq=8
or::
- ./dpdktestpmd -c f -n 4 -- -i --txq=4 --rxq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
testpmd> port stop all
testpmd> port config all rxq
testpmd> port config all rxq 8
diff --git a/test_plans/link_flowctrl_test_plan.rst b/test_plans/link_flowctrl_test_plan.rst
index d3bd8af8..373cd39a 100644
--- a/test_plans/link_flowctrl_test_plan.rst
+++ b/test_plans/link_flowctrl_test_plan.rst
@@ -91,7 +91,7 @@ Prerequisites
Assuming that ports ``0`` and ``2`` are connected to a traffic generator,
launch the ``testpmd`` with the following arguments::
- ./build/app/testpmd -cffffff -n 3 -- -i --burst=1 --txpt=32 \
+ ./build/app/dpdk-testpmd -cffffff -n 3 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5
The ``-n`` option is used to select the number of memory channels.
diff --git a/test_plans/link_status_interrupt_test_plan.rst b/test_plans/link_status_interrupt_test_plan.rst
index 32dea9a4..fe210916 100644
--- a/test_plans/link_status_interrupt_test_plan.rst
+++ b/test_plans/link_status_interrupt_test_plan.rst
@@ -73,11 +73,18 @@ to the device under test::
The test app needs an extra cmdline option, ``--vfio-intr=int_x``.
+Build dpdk and examples=link_status_interrupt::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=link_status_interrupt <build_target>
+ ninja -C <build_target>
+
Assume port 0 and 1 are connected to the remote ports, e.g. packet generator.
To run the test application in linuxapp environment with 4 lcores, 2 ports and
2 RX queues per lcore::
- $ ./link_status_interrupt -c f -- -q 2 -p 0x3
+ $ ./<build_target>/examples/dpdk-link_status_interrupt -c f -- -q 2 -p 0x3
Also, if the ports need to be tested are different, the port mask should be
changed. The lcore used to run the test application and the number of queues
diff --git a/test_plans/loopback_multi_paths_port_restart_test_plan.rst b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
index 8418996b..ba765caf 100644
--- a/test_plans/loopback_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
@@ -45,13 +45,13 @@ Test Case 1: loopback test with packed ring mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -86,13 +86,13 @@ Test Case 2: loopback test with packed ring non-mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -127,13 +127,13 @@ Test Case 3: loopback test with packed ring inorder mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -168,13 +168,13 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
-- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -209,13 +209,13 @@ Test Case 5: loopback test with split ring inorder mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -250,13 +250,13 @@ Test Case 6: loopback test with split ring inorder non-mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -291,13 +291,13 @@ Test Case 7: loopback test with split ring mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -332,13 +332,13 @@ Test Case 8: loopback test with split ring non-mergeable path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
-- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
@@ -373,13 +373,13 @@ Test Case 9: loopback test with split ring vector_rx path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -414,13 +414,13 @@ Test Case 10: loopback test with packed ring vectorized path
1. Launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=0,mrg_rxbuf=0,vectorized=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
>set fwd mac
diff --git a/test_plans/loopback_multi_queues_test_plan.rst b/test_plans/loopback_multi_queues_test_plan.rst
index 3d2851b8..fae367c6 100644
--- a/test_plans/loopback_multi_queues_test_plan.rst
+++ b/test_plans/loopback_multi_queues_test_plan.rst
@@ -45,14 +45,14 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -76,14 +76,14 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -105,14 +105,14 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -136,14 +136,14 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=0 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -165,14 +165,14 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -196,14 +196,14 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=1 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -225,14 +225,14 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -256,14 +256,14 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=1 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -285,14 +285,14 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=0 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -316,14 +316,14 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=0 \
-- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -345,14 +345,14 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \
-- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
@@ -376,14 +376,14 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \
-- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -405,14 +405,14 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -436,14 +436,14 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -465,14 +465,14 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -496,14 +496,14 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -525,13 +525,13 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
-- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024
@@ -555,13 +555,13 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
-- -i --rx-offloads=0x10 --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
@@ -583,14 +583,14 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-2 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-6 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 5-6 \
--no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -614,14 +614,14 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue
6. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 1-9 -n 4 --no-pci \
+ ./<build_target>/app/dpdk-testpmd -l 1-9 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user by below command::
- ./testpmd -n 4 -l 10-18 \
+ ./<build_target>/app/dpdk-testpmd -n 4 -l 10-18 \
--no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
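Across all of these hunks the pattern is the same: the old in-tree ``./testpmd`` path is replaced by the meson-built ``./<build_target>/app/dpdk-testpmd``. A minimal shell sketch of how that path is composed (the directory name used here is only an example; ``<build_target>`` is whatever directory meson was configured into):

```shell
# <build_target> is the meson build directory; dpdk-testpmd lives in app/ under it.
build_target=x86_64-native-linuxapp-gcc    # example name, not mandated by the patch
testpmd=./$build_target/app/dpdk-testpmd
echo "$testpmd"
```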
diff --git a/test_plans/mac_filter_test_plan.rst b/test_plans/mac_filter_test_plan.rst
index a9695cfc..f40ed8b1 100644
--- a/test_plans/mac_filter_test_plan.rst
+++ b/test_plans/mac_filter_test_plan.rst
@@ -48,7 +48,7 @@ Prerequisites
Assuming that at least one port is connected to a traffic generator,
launch the ``testpmd`` with the following arguments::
- ./x86_64-default-linuxapp-gcc/build/app/test-pmd/testpmd -c 0xc3 -n 3 -- -i \
+ ./<build_target>/app/dpdk-testpmd -c 0xc3 -n 3 -- -i \
--burst=1 --rxpt=0 --rxht=0 --rxwt=0 --txpt=36 --txht=0 --txwt=0 \
--txfreet=32 --rxfreet=64 --mbcache=250 --portmask=0x3
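The ``-c 0xc3`` coremask in the command above selects cores 0, 1, 6 and 7. As an illustration (this helper is hypothetical, not part of DPDK), such a mask can be derived from a core list like so:

```shell
# Build the hex coremask for testpmd's -c option from a list of core IDs.
cores="0 1 6 7"
mask=0
for c in $cores; do
  mask=$(( mask | (1 << c) ))   # set the bit for each selected core
done
printf '0x%x\n' "$mask"         # 0xc3
```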
diff --git a/test_plans/macsec_for_ixgbe_test_plan.rst b/test_plans/macsec_for_ixgbe_test_plan.rst
index 660c2fd1..68c2c2c8 100644
--- a/test_plans/macsec_for_ixgbe_test_plan.rst
+++ b/test_plans/macsec_for_ixgbe_test_plan.rst
@@ -113,7 +113,7 @@ Test Case 1: MACsec packets send and receive
1. Start the testpmd of rx port::
- ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \
-- -i --port-topology=chained
2. Set MACsec offload on::
@@ -150,7 +150,7 @@ Test Case 1: MACsec packets send and receive
1. Start the testpmd of tx port::
- ./testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -a 0000:07:00.0 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -a 0000:07:00.0 \
-- -i --port-topology=chained
2. Set MACsec offload on::
@@ -403,7 +403,7 @@ Test Case 7: performance test of MACsec offload packets
Port0 connected to IXIA port5, port1 connected to IXIA port6, set port0
MACsec offload on, set fwd mac::
- ./testpmd -c 0xf --socket-mem 1024,0 -- -i \
+ ./<build_target>/app/dpdk-testpmd -c 0xf --socket-mem 1024,0 -- -i \
--port-topology=chained
testpmd> set macsec offload 0 on encrypt on replay-protect on
testpmd> set fwd mac
@@ -422,7 +422,7 @@ Test Case 7: performance test of MACsec offload packets
with cable, connect 05:00.0 to IXIA. Bind the three ports to dpdk driver.
Start two testpmd instances::
- ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \
-- -i --port-topology=chained
testpmd> set macsec offload 0 on encrypt on replay-protect on
@@ -432,7 +432,7 @@ Test Case 7: performance test of MACsec offload packets
testpmd> set macsec sa tx 0 0 0 0 00112200000000000000000000000000
testpmd> set fwd rxonly
- ./testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -b 0000:07:00.1 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -b 0000:07:00.1 \
-- -i --port-topology=chained
testpmd> set macsec offload 1 on encrypt on replay-protect on
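The rx and tx instances above are launched with coremasks 0xf and 0xf0 precisely so that they never share an lcore; a quick sketch of that disjointness property:

```shell
# The two testpmd instances must be pinned to non-overlapping core sets.
rx_mask=0xf    # cores 0-3, rx-side testpmd
tx_mask=0xf0   # cores 4-7, tx-side testpmd
if [ $(( rx_mask & tx_mask )) -eq 0 ]; then
  echo "coremasks are disjoint"
fi
```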
diff --git a/test_plans/malicious_driver_event_indication_test_plan.rst b/test_plans/malicious_driver_event_indication_test_plan.rst
index 1c9d244f..c97555ba 100644
--- a/test_plans/malicious_driver_event_indication_test_plan.rst
+++ b/test_plans/malicious_driver_event_indication_test_plan.rst
@@ -62,10 +62,10 @@ Test Case1: Check log output when malicious driver events is detected
echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/max_vfs
2. Launch PF by testpmd
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i
3. Launch VF by testpmd
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i
> set fwd txonly
> start
@@ -83,11 +83,11 @@ Test Case2: Check the event counter number for malicious driver events
echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/max_vfs
2. Launch PF by testpmd
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i
3. Launch VF by testpmd and start txonly mode 3 times:
repeat the following steps 3 times
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i
> set fwd txonly
> start
> quit
diff --git a/test_plans/metering_and_policing_test_plan.rst b/test_plans/metering_and_policing_test_plan.rst
index e3fb308b..11142395 100644
--- a/test_plans/metering_and_policing_test_plan.rst
+++ b/test_plans/metering_and_policing_test_plan.rst
@@ -144,7 +144,7 @@ Bind them to dpdk igb_uio driver,
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -s 0x10 -n 4 \
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -s 0x10 -n 4 \
--vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli' \
-- -i --portmask=0x10 --disable-rss
testpmd> start
@@ -153,7 +153,7 @@ Bind them to dpdk igb_uio driver,
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -s 0x10 -n 4 \
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -s 0x10 -n 4 \
--vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli' \
-- -i --portmask=0x10 --disable-rss
testpmd> set port tm hierarchy default 1
@@ -173,7 +173,7 @@ Test Case 1: ipv4 ACL table RFC2698 GYR
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
@@ -226,7 +226,7 @@ Test Case 2: ipv4 ACL table RFC2698 GYD
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -275,7 +275,7 @@ Test Case 3: ipv4 ACL table RFC2698 GDR
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -327,7 +327,7 @@ Test Case 4: ipv4 ACL table RFC2698 DYR
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -378,7 +378,7 @@ Test Case 5: ipv4 ACL table RFC2698 DDD
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -426,7 +426,7 @@ Test Case 6: ipv4 with same CBS and PBS GDR
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -467,7 +467,7 @@ Test Case 7: ipv4 HASH table RFC2698
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table,
::
@@ -507,7 +507,7 @@ Test Case 8: ipv6 ACL table RFC2698
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table,
::
@@ -561,7 +561,7 @@ Test Case 9: multiple meter and profile
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -s 0x10 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=4 --txq=4 --portmask=0x10 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -s 0x10 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=4 --txq=4 --portmask=0x10 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -664,7 +664,7 @@ Test Case 10: ipv4 RFC2698 pre-colored red by DSCP table
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -755,7 +755,7 @@ Test Case 11: ipv4 RFC2698 pre-colored yellow by DSCP table
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
@@ -848,7 +848,7 @@ Test Case 12: ipv4 RFC2698 pre-colored green by DSCP table
::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss
Add rules to table, set CBS to 400 bytes, PBS to 500 bytes
::
diff --git a/test_plans/mtu_update_test_plan.rst b/test_plans/mtu_update_test_plan.rst
index b62ec15a..5a60746a 100644
--- a/test_plans/mtu_update_test_plan.rst
+++ b/test_plans/mtu_update_test_plan.rst
@@ -59,7 +59,7 @@ Assuming that ports ``0`` and ``1`` of the test target are directly connected
to the traffic generator, launch the ``testpmd`` application with the following
arguments::
- ./build/app/testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \
+ ./build/app/dpdk-testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \
--tx-offloads=0x00008000
The -n option selects the number of memory channels and should match the number of memory channels on that setup.
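As a side note on ``--max-pkt-len=9600``: the MTU it implies can be sketched by subtracting the standard Ethernet overhead (assuming an untagged frame, 14-byte header plus 4-byte CRC):

```shell
# MTU implied by a given --max-pkt-len, for untagged Ethernet frames.
max_pkt_len=9600
overhead=$(( 14 + 4 ))              # Ethernet header + CRC
mtu=$(( max_pkt_len - overhead ))
echo "$mtu"                         # 9582
```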
diff --git a/test_plans/multiple_pthread_test_plan.rst b/test_plans/multiple_pthread_test_plan.rst
index 8dad22d4..9603c494 100644
--- a/test_plans/multiple_pthread_test_plan.rst
+++ b/test_plans/multiple_pthread_test_plan.rst
@@ -81,7 +81,7 @@ Test Case 1: Basic operation
To run the application, start testpmd with some lcores sharing one core as
threads and one lcore given a unique core, with the following command::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='0@8,(4-5)@9' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='0@8,(4-5)@9' -n 4 -- -i
Use the following command to make sure the lcores are initialized on the correct CPUs::
@@ -90,11 +90,11 @@ Using the command to make sure the lcore are init on the correct cpu::
Result as follows::
PID TID %CPU PSR COMMAND
- 31038 31038 22.5 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31040 0.0 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31041 0.0 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31038 22.5 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31040 0.0 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31041 0.0 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
The TIDs of these threads are shown below::
@@ -134,11 +134,11 @@ Check forward configuration::
Send packets continuously::
PID TID %CPU PSR COMMAND
- 31038 31038 0.6 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31040 1.5 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31041 1.5 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
- 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31038 0.6 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31040 1.5 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31041 1.5 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
+ 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
You can see that TID 31040 (Lcore 4) and TID 31041 (Lcore 5) are running.
@@ -150,7 +150,7 @@ Give examples, suppose DUT have 128 cpu core.
Case 1::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='0@8,(4-5)@(8-11)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='0@8,(4-5)@(8-11)' -n 4 -- -i
It starts 3 EAL threads::
@@ -159,7 +159,7 @@ It means start 3 EAL thread::
Case 2::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='1,2@(0-4,6),(3-4,6)@5,(7,8)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='1,2@(0-4,6),(3-4,6)@5,(7,8)' -n 4 -- -i
It starts 7 EAL threads::
@@ -171,7 +171,7 @@ It means start 7 EAL thread::
Case 3::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,CONFIG_RTE_MAX_LCORE-1)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,CONFIG_RTE_MAX_LCORE-1)@(4,5)' -n 4 -- -i
(default CONFIG_RTE_MAX_LCORE=128).
It starts 2 EAL threads::
@@ -180,7 +180,7 @@ It means start 2 EAL thread::
Case 4::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,64-66)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,64-66)@(4,5)' -n 4 -- -i
It starts 4 EAL threads::
@@ -188,7 +188,7 @@ It means start 4 EAL thread::
Case 5::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2-5,6,7-9' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2-5,6,7-9' -n 4 -- -i
It starts 8 EAL threads::
@@ -203,7 +203,7 @@ It means start 8 EAL thread::
Case 6::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,(3-5)@3' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,(3-5)@3' -n 4 -- -i
It starts 4 EAL threads::
@@ -212,7 +212,7 @@ It means start 4 EAL thread::
Case 7::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,7-4)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,7-4)@(4,5)' -n 4 -- -i
It starts 5 EAL threads::
@@ -224,19 +224,19 @@ Test Case 3: Negative Test
--------------------------
Input invalid commands to make sure they are rejected::
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0-,4-7)@(4,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(-1,4-7)@(4,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7-9)@(4,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,abcd)@(4,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(1-,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(-1,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(4,5-8-9)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(abc,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(4,xyz)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)=(8,9)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,3 at 4,(0-1,,4))' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='[0-,4-7]@(4,5)' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0-,4-7)@[4,5]' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='3-4 at 3,2 at 5-6' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,,3''2--3' -n 4 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,,,3''2--3' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0-,4-7)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(-1,4-7)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7-9)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,abcd)@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(1-,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(-1,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(4,5-8-9)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(abc,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(4,xyz)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)=(8,9)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,3 at 4,(0-1,,4))' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='[0-,4-7]@(4,5)' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0-,4-7)@[4,5]' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='3-4 at 3,2 at 5-6' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,,3''2--3' -n 4 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,,,3''2--3' -n 4 -- -i
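All of the strings above are rejected by EAL's ``--lcores`` parser. Purely as an illustration of the error classes involved (empty fields, square brackets, unbalanced parentheses), here is a coarse, hypothetical pre-check in shell; it is nowhere near EAL's full grammar and only catches the obvious typos:

```shell
# Coarse sanity check for an --lcores argument: rejects ",," fields,
# square brackets, and unbalanced parentheses. Not EAL's real parser.
check_lcores() {
  s=$1
  case $s in *,,*|*'['*|*']'*) return 1 ;; esac
  open=$(printf '%s' "$s" | tr -cd '(' | wc -c)
  close=$(printf '%s' "$s" | tr -cd ')' | wc -c)
  [ "$open" -eq "$close" ]
}
check_lcores '2,(3-5)@3'      && echo valid
check_lcores '[0-,4-7]@(4,5)' || echo rejected
check_lcores '2,,3'           || echo rejected
```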
diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index 7781bffc..31ba2b15 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -45,7 +45,11 @@ Assume one port is connected to the tester and "linuxptp.x86_64"
has been installed on the tester.
Case Config::
- For support IEEE1588, need to set "CONFIG_RTE_LIBRTE_IEEE1588=y" in ./config/common_base and re-build DPDK.
+
+ Meson: To support IEEE1588, execute "sed -i '$a\#define RTE_LIBRTE_IEEE1588 1' config/rte_config.h",
+ and re-build DPDK.
+ $ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ $ ninja -C <build_target>
The sample should be validated on Fortville, Niantic and i350 NICs.
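The ``sed -i '$a\...'`` idiom introduced above appends a line after the last line of ``config/rte_config.h``. A self-contained demonstration of the same idiom on a scratch file (the file name here is invented for the demo):

```shell
# GNU sed: '$a\text' appends "text" after the last line of the file.
printf '#define RTE_EXAMPLE 1\n' > rte_config_demo.h
sed -i '$a\#define RTE_LIBRTE_IEEE1588 1' rte_config_demo.h
tail -n 1 rte_config_demo.h    # #define RTE_LIBRTE_IEEE1588 1
rm -f rte_config_demo.h
```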
@@ -57,7 +61,7 @@ Start ptp server on tester with IEEE 802.3 network transport::
Start ptp client on DUT and wait a few seconds::
- ./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 0 -p 0x1
+ ./<build_target>/examples/dpdk-ptpclient -c f -n 3 -- -T 0 -p 0x1
Check that the output message contains the T1, T2, T3, T4 clocks and that the time
difference between master and slave time is about 10us in Niantic, 20us in Fortville,
@@ -79,7 +83,7 @@ Start ptp server on tester with IEEE 802.3 network transport::
Start ptp client on DUT and wait a few seconds::
- ./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 1 -p 0x1
+ ./<build_target>/examples/dpdk-ptpclient -c f -n 3 -- -T 1 -p 0x1
Make sure the DUT system time has been changed to the same as the tester's.
Check that the output message contains the T1, T2, T3, T4 clocks and the time difference
diff --git a/test_plans/ptype_mapping_test_plan.rst b/test_plans/ptype_mapping_test_plan.rst
index d157b670..fdabd191 100644
--- a/test_plans/ptype_mapping_test_plan.rst
+++ b/test_plans/ptype_mapping_test_plan.rst
@@ -61,7 +61,7 @@ Add print info to testpmd for case::
Start testpmd, enable rxonly and verbose mode::
- ./testpmd -c f -n 4 -- -i --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained
Test Case 1: Get ptype mapping
==============================
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst
index 7b0a8d14..488596ed 100644
--- a/test_plans/qinq_filter_test_plan.rst
+++ b/test_plans/qinq_filter_test_plan.rst
@@ -58,7 +58,7 @@ Testpmd configuration - 4 RX/TX queues per port
#. set up testpmd with fortville NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss
#. enable qinq::
@@ -91,7 +91,7 @@ Testpmd configuration - 4 RX/TX queues per port
#. set up testpmd with fortville NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss
#. enable qinq::
@@ -134,7 +134,7 @@ Test Case 3: qinq packet filter to VF queues
#. set up testpmd with fortville PF NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4
#. enable qinq::
@@ -160,7 +160,7 @@ Test Case 3: qinq packet filter to VF queues
#. set up testpmd with fortville VF0 NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4
#. Set PMD fwd to only receive the packets::
@@ -176,7 +176,7 @@ Test Case 3: qinq packet filter to VF queues
#. set up testpmd with fortville VF1 NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4
#. Set PMD fwd to only receive the packets::
@@ -211,7 +211,7 @@ Test Case 4: qinq packet filter with different tpid
#. set up testpmd with fortville PF NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4
#. enable qinq::
@@ -241,7 +241,7 @@ Test Case 4: qinq packet filter with different tpid
#. set up testpmd with fortville VF0 NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4
#. Set PMD fwd to only receive the packets::
@@ -257,7 +257,7 @@ Test Case 4: qinq packet filter with different tpid
#. set up testpmd with fortville VF1 NICs::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4
#. Set PMD fwd to only receive the packets::
diff --git a/test_plans/qos_api_test_plan.rst b/test_plans/qos_api_test_plan.rst
index f8a77d4c..9102907e 100644
--- a/test_plans/qos_api_test_plan.rst
+++ b/test_plans/qos_api_test_plan.rst
@@ -90,7 +90,7 @@ Test Case: dcb 4 tc queue mapping
=================================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
testpmd> port stop all
testpmd> port config 0 dcb vt off 4 pfc off
testpmd> port config 1 dcb vt off 4 pfc off
@@ -115,7 +115,7 @@ Test Case: dcb 8 tc queue mapping
=================================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip
testpmd> port stop all
testpmd> port config 0 dcb vt off 8 pfc off
testpmd> port config 1 dcb vt off 8 pfc off
@@ -148,7 +148,7 @@ Test Case: shaping 1 port 4 tc
==============================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
testpmd> port stop all
testpmd> port config 0 dcb vt off 4 pfc off
testpmd> port config 1 dcb vt off 4 pfc off
@@ -191,7 +191,7 @@ Test Case: shaping 1 port 8 tc
===============================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip
testpmd> port stop all
testpmd> port config 0 dcb vt off 8 pfc off
testpmd> port config 1 dcb vt off 8 pfc off
@@ -246,7 +246,7 @@ Test Case: shaping for port
===========================
1. Start testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
testpmd> port stop 1
1. Add private shaper 0::
@@ -273,7 +273,7 @@ Test Case: dcb 4 tc queue mapping
=================================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss
testpmd> vlan set filter off 0
testpmd> vlan set filter off 1
testpmd> port stop all
@@ -300,7 +300,7 @@ Test Case: dcb 8 tc queue mapping
=================================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss
testpmd> vlan set filter off 0
testpmd> vlan set filter off 1
testpmd> port stop all
@@ -335,7 +335,7 @@ Test Case: shaping for queue with 4 tc
======================================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss
testpmd> vlan set filter off 0
testpmd> vlan set filter off 1
testpmd> port stop all
@@ -381,7 +381,7 @@ Test Case: shaping for queue with 8 tc
======================================
1. Start testpmd and set DCB::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss
testpmd> vlan set filter off 0
testpmd> vlan set filter off 1
testpmd> port stop all
diff --git a/test_plans/queue_region_test_plan.rst b/test_plans/queue_region_test_plan.rst
index 1db71094..7e4c9ca6 100644
--- a/test_plans/queue_region_test_plan.rst
+++ b/test_plans/queue_region_test_plan.rst
@@ -79,7 +79,7 @@ Prerequisites
4. start the testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16
testpmd> port config all rss all
testpmd> set fwd rxonly
testpmd> set verbose 1
--
2.25.1
* [dts][PATCH V1 3/4] test_plans/*: modify test plan to adapt meson build
2022-01-22 18:20 [dts][PATCH V1 0/4] test_plans/*: modify test plan to adapt meson build Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 1/4] " Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 2/4] " Yu Jiang
@ 2022-01-22 18:20 ` Yu Jiang
2022-01-22 18:20 ` [dts][PATCH V1 4/4] " Yu Jiang
3 siblings, 0 replies; 7+ messages in thread
From: Yu Jiang @ 2022-01-22 18:20 UTC (permalink / raw)
To: lijuan.tu, dts; +Cc: Yu Jiang
test_plans/*: modify test plan to adapt meson build
Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
test_plans/queue_start_stop_test_plan.rst | 2 +-
| 30 +++++++++----------
test_plans/rteflow_priority_test_plan.rst | 16 +++++-----
...ntime_vf_queue_number_kernel_test_plan.rst | 10 +++----
.../runtime_vf_queue_number_test_plan.rst | 26 ++++++++--------
test_plans/rxtx_callbacks_test_plan.rst | 11 +++++--
test_plans/rxtx_offload_test_plan.rst | 16 +++++-----
test_plans/scatter_test_plan.rst | 2 +-
.../vdev_primary_secondary_test_plan.rst | 4 +--
test_plans/veb_switch_test_plan.rst | 30 +++++++++----------
test_plans/vf_daemon_test_plan.rst | 2 +-
test_plans/vf_jumboframe_test_plan.rst | 2 +-
test_plans/vf_kernel_test_plan.rst | 2 +-
test_plans/vf_l3fwd_test_plan.rst | 13 ++++++--
test_plans/vf_single_core_perf_test_plan.rst | 2 +-
...tio_user_as_exceptional_path_test_plan.rst | 6 ++--
...ser_for_container_networking_test_plan.rst | 8 ++---
17 files changed, 97 insertions(+), 85 deletions(-)
diff --git a/test_plans/queue_start_stop_test_plan.rst b/test_plans/queue_start_stop_test_plan.rst
index 3a2a7baf..82e35280 100644
--- a/test_plans/queue_start_stop_test_plan.rst
+++ b/test_plans/queue_start_stop_test_plan.rst
@@ -53,7 +53,7 @@ Assume port A and B are connected to the remote ports, e.g. packet generator.
To run the testpmd application in linuxapp environment with 4 lcores,
4 channels with other default parameters in interactive mode::
- $ ./testpmd -c 0xf -n 4 -- -i
+ $ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -- -i
Test Case: queue start/stop
---------------------------
diff --git a/test_plans/rss_to_rte_flow_test_plan.rst b/test_plans/rss_to_rte_flow_test_plan.rst
index f770d975..cb006d80 100644
--- a/test_plans/rss_to_rte_flow_test_plan.rst
+++ b/test_plans/rss_to_rte_flow_test_plan.rst
@@ -84,7 +84,7 @@ Test case: set rss types on two ports (I40E)
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -210,7 +210,7 @@ Test case: set rss queues on two ports (I40E)
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -332,7 +332,7 @@ Test case: set rss types and rss queues on two ports (I40E)
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=8 --txq=8 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=8 --txq=8 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -399,7 +399,7 @@ Test case: disable rss in command-line (I40E)
1. Start the testpmd::
- ./testpmd -c 0x3 -n 4 -- -i --rxq=8 --txq=8 --disable-rss --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --rxq=8 --txq=8 --disable-rss --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -468,7 +468,7 @@ Only i40e support key and key_len setting.
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -586,7 +586,7 @@ Test case: Flow directory rule and RSS rule combination (I40E)
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --pkt-filter-mode=perfect
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -616,7 +616,7 @@ Test case: Set queue-region with rte_flow api (I40E)
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=16 --rxq=16 --txq=16 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=16 --rxq=16 --txq=16 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -673,7 +673,7 @@ Test case: Set queue region in rte_flow with invalid parameter (I40E)
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=16 --rxq=16 --txq=16 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=16 --rxq=16 --txq=16 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -718,7 +718,7 @@ be implemented with fortville.
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end queues end / end
testpmd> set fwd rxonly
testpmd> set verbose 1
@@ -755,7 +755,7 @@ Test case: disable and enable rss
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -797,7 +797,7 @@ Test case: enable ipv4-udp rss
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=4 --txq=4 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -826,7 +826,7 @@ Test case: set rss valid/invalid queue rule
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -879,7 +879,7 @@ Test case: Different packet types
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -914,7 +914,7 @@ Test case: disable rss in command-line
1. Start the testpmd::
- ./testpmd -c 0x3 -n 4 -- -i --rxq=8 --txq=8 --disable-rss --port-topology=chained
+ ./<build_target>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --rxq=8 --txq=8 --disable-rss --port-topology=chained
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -950,7 +950,7 @@ Test case: Flow directory rule and RSS rule combination
1. Start the testpmd::
- ./testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --pkt-filter-mode=perfect
+ ./<build_target>/app/dpdk-testpmd -c 1ffff -n 4 -- -i --nb-cores=8 --rxq=16 --txq=16 --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
diff --git a/test_plans/rteflow_priority_test_plan.rst b/test_plans/rteflow_priority_test_plan.rst
index 4ca1c22e..405a1bf1 100644
--- a/test_plans/rteflow_priority_test_plan.rst
+++ b/test_plans/rteflow_priority_test_plan.rst
@@ -71,7 +71,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
@@ -90,7 +90,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=0 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=0 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
@@ -119,7 +119,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
rx_vxlan_port add 4789 0
@@ -184,7 +184,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
@@ -206,7 +206,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
@@ -232,7 +232,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
@@ -257,7 +257,7 @@ Patterns in this case:
#. Start the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
@@ -298,7 +298,7 @@ Patterns in this case:
#. Restart the ``testpmd`` application as follows::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:af:00.0, pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
set fwd rxonly
set verbose 1
diff --git a/test_plans/runtime_vf_queue_number_kernel_test_plan.rst b/test_plans/runtime_vf_queue_number_kernel_test_plan.rst
index 5c8bab3e..d4f01a3b 100644
--- a/test_plans/runtime_vf_queue_number_kernel_test_plan.rst
+++ b/test_plans/runtime_vf_queue_number_kernel_test_plan.rst
@@ -127,7 +127,7 @@ Test Case 1: set valid VF queue number in testpmd command-line options
1. Start VF testpmd with "--rxq=[rxq] --txq=[txq]", and random valid values from 1 to 16, take 3 for example::
- ./testpmd -c 0xf0 -n 4 -a 00:04.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 00:04.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=3 --txq=3
2. Configure vf forwarding prerequisites and start forwarding::
@@ -169,7 +169,7 @@ Test case 2: set invalid VF queue number in testpmd command-line options
1. Start VF testpmd with "--rxq=0 --txq=0" ::
- ./testpmd -c 0xf0 -n 4 -a 00:04.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 00:04.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=0 --txq=0
Verify testpmd exited with error as below::
@@ -178,7 +178,7 @@ Test case 2: set invalid VF queue number in testpmd command-line options
2. Start VF testpmd with "--rxq=17 --txq=17" ::
- ./testpmd -c 0xf0 -n 4 -a 00:04.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 00:04.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=17 --txq=17
Verify testpmd exited with error as below::
@@ -190,7 +190,7 @@ Test case 3: set valid VF queue number with testpmd function command
1. Start VF testpmd without setting "rxq" and "txq"::
- ./testpmd -c 0xf0 -n 4 -a 00:04.0 --socket-mem 1024,1024 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 00:04.0 --socket-mem 1024,1024 -- -i
2. Configure vf forwarding prerequisites and start forwarding::
@@ -211,7 +211,7 @@ Test case 4: set invalid VF queue number with testpmd function command
1. Start VF testpmd without setting "rxq" and "txq"::
- ./testpmd -c 0xf0 -n 4 -a 00:04.0 --socket-mem 1024,1024 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 00:04.0 --socket-mem 1024,1024 -- -i
2. Set rx queue number and tx queue number with 0 ::
diff --git a/test_plans/runtime_vf_queue_number_test_plan.rst b/test_plans/runtime_vf_queue_number_test_plan.rst
index b0c2c3a9..c4aaaed0 100644
--- a/test_plans/runtime_vf_queue_number_test_plan.rst
+++ b/test_plans/runtime_vf_queue_number_test_plan.rst
@@ -130,14 +130,14 @@ Test case 1: reserve valid vf queue number
1. Start PF testpmd with random queue-num-per-vf in [1, 2, 4, 8 ,16], for example, we use 4 as the reserved vf queue numbers::
- ./testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=4 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=4 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
Note testpmd can be started normally without any warning or error.
2. Start VF testpmd::
- ./testpmd -c 0xf0 -n 4 -a 03:00.0 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 03:00.0 \
--file-prefix=test2 --socket-mem 1024,1024 -- -i
3. VF requests a queue number that is equal to the reserved queue number, and no VF reset is observed while configuring it::
@@ -195,7 +195,7 @@ Test case 2: reserve invalid VF queue number
1. Start PF testpmd with random queue-num-per-vf in [0, 3, 5-7 , 9-15, 17], for example, we use 0 as the reserved vf queue numbers::
- ./testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=0 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=0 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
2. Verify testpmd started with logs as below::
@@ -207,12 +207,12 @@ Test case 3: set valid VF queue number in testpmd command-line options
1. Start PF testpmd::
- ./testpmd -c f -n 4 -a 18:00.0 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
2. Start VF testpmd with "--rxq=[rxq] --txq=[txq]", and random valid values from 1 to 16, take 3 for example::
- ./testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=3 --txq=3
3. Configure vf forwarding prerequisites and start forwarding::
@@ -254,12 +254,12 @@ Test case 4: set invalid VF queue number in testpmd command-line options
1. Start PF testpmd::
- ./testpmd -c f -n 4 -a 18:00.0 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
2. Start VF testpmd with "--rxq=0 --txq=0" ::
- ./testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=0 --txq=0
Verify testpmd exited with error as below::
@@ -268,7 +268,7 @@ Test case 4: set invalid VF queue number in testpmd command-line options
3. Start VF testpmd with "--rxq=17 --txq=17" ::
- ./testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=17 --txq=17
Verify testpmd exited with error as below::
@@ -280,12 +280,12 @@ Test case 5: set valid VF queue number with testpmd function command
1. Start PF testpmd::
- ./testpmd -c f -n 4 -a 18:00.0 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
2. Start VF testpmd without setting "rxq" and "txq"::
- ./testpmd -c 0xf0 -n 4 -a 05:02.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 05:02.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i
3. Configure vf forwarding prerequisites and start forwarding::
@@ -307,12 +307,12 @@ Test case 6: set invalid VF queue number with testpmd function command
1. Start PF testpmd::
- ./testpmd -c f -n 4 -a 18:00.0 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
2. Start VF testpmd without setting "rxq" and "txq"::
- ./testpmd -c 0xf0 -n 4 -a 05:02.0 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 05:02.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i
@@ -344,7 +344,7 @@ Test case 7: Reserve VF queue number when VF bind to kernel driver
2. Reserve VF queue number ::
- ./testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=2 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=2 \
--file-prefix=test1 --socket-mem 1024,1024 -- -i
3. Check the VF0 rxq and txq number is 2::
diff --git a/test_plans/rxtx_callbacks_test_plan.rst b/test_plans/rxtx_callbacks_test_plan.rst
index 941c2097..ea88c5c9 100644
--- a/test_plans/rxtx_callbacks_test_plan.rst
+++ b/test_plans/rxtx_callbacks_test_plan.rst
@@ -46,11 +46,16 @@ prior to transmission to calculate the elapsed time, in CPU cycles.
Running the Application
=======================
-Set ``CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y`` in config/common_base.
+Build dpdk and examples=rxtx_callbacks::
+
+   CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+   ninja -C <build_target>
+
+   meson configure -Dexamples=rxtx_callbacks <build_target>
+   ninja -C <build_target>
To run the example in a ``linuxapp`` environment::
- ./build/rxtx_callbacks -c 2 -n 4
+ ./<build_target>/examples/dpdk-rxtx_callbacks -c 2 -n 4
Refer to *DPDK Getting Started Guide* for general information on running
applications and the Environment Abstraction Layer (EAL) options.
@@ -60,7 +65,7 @@ Test Case:rxtx callbacks
Run the example::
- ./examples/rxtx_callbacks/build/rxtx_callbacks -c 2 -n 4
+ ./<build_target>/examples/dpdk-rxtx_callbacks -c 2 -n 4
waked up:::
diff --git a/test_plans/rxtx_offload_test_plan.rst b/test_plans/rxtx_offload_test_plan.rst
index 9d1029ab..172bb9cd 100644
--- a/test_plans/rxtx_offload_test_plan.rst
+++ b/test_plans/rxtx_offload_test_plan.rst
@@ -113,7 +113,7 @@ Test case: Rx offload per-port setting in command-line
1. Enable rx cksum in command-line::
- ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4 --enable-rx-cksum
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --rxq=4 --txq=4 --enable-rx-cksum
testpmd> set fwd csum
testpmd> set verbose 1
testpmd> show port 0 rx_offload configuration
@@ -173,7 +173,7 @@ Test case: NNT Rx offload per-queue setting
1. Start testpmd::
- ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --rxq=4 --txq=4
testpmd> set fwd mac
testpmd> set verbose 1
testpmd> show port info all
@@ -287,7 +287,7 @@ Test case: Tx offload per-port setting
1. Start testpmd::
- ./testpmd -c 0x6 -n 4 -- -i --rxq=4 --txq=4 --port-topology=loop
+ ./<build_target>/app/dpdk-testpmd -c 0x6 -n 4 -- -i --rxq=4 --txq=4 --port-topology=loop
testpmd> set fwd txonly
testpmd> set verbose 1
testpmd> show port 0 tx_offload configuration
@@ -346,7 +346,7 @@ Test case: Tx offload per-port setting in command-line
1. Start testpmd with "--tx-offloads"::
- ./testpmd -c 0xf -n 4 -- -i --rxq=4 --txq=4 --port-topology=loop --tx-offloads=0x0001
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -- -i --rxq=4 --txq=4 --port-topology=loop --tx-offloads=0x0001
testpmd> show port 0 tx_offload configuration
Tx Offloading Configuration of port 0 :
Port : VLAN_INSERT
@@ -446,7 +446,7 @@ Test case: Tx offload checksum
1. Set checksum forward mode::
- ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --rxq=4 --txq=4
testpmd> set fwd csum
testpmd> set verbose 1
testpmd> show port 0 tx_offload configuration
@@ -519,7 +519,7 @@ Test case: FVL Tx offload per-queue setting
1. Start testpmd and get the tx_offload capability and configuration::
- ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --rxq=4 --txq=4
testpmd> show port 0 tx_offload capabilities
Tx Offloading Capabilities of port 0 :
Per Queue : MBUF_FAST_FREE
@@ -604,7 +604,7 @@ Test case: Tx offload multi_segs setting
1. Start testpmd with "--tx-offloads=0x00008000" to enable tx_offload multi_segs ::
- ./testpmd -c 0xf -n 4 -- -i --tx-offloads==0x00008000
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -- -i --tx-offloads=0x00008000
testpmd> show port 0 tx_offload configuration
Tx Offloading Configuration of port 0 :
Port : MULTI_SEGS
@@ -648,7 +648,7 @@ Test case: Tx offload multi_segs setting
4. Start testpmd again without "--tx-offloads", check multi-segs is disabled by default::
- ./testpmd -c 0xf -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -- -i
testpmd> show port 0 tx_offload configuration
No MULTI_SEGS in Tx Offloading Configuration of ports
diff --git a/test_plans/scatter_test_plan.rst b/test_plans/scatter_test_plan.rst
index 901c6dc4..7841a2a7 100644
--- a/test_plans/scatter_test_plan.rst
+++ b/test_plans/scatter_test_plan.rst
@@ -101,7 +101,7 @@ Assuming that ports ``0`` and ``1`` of the test target are directly connected
to a Traffic Generator, launch the ``testpmd`` application with the following
arguments::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i --mbcache=200 \
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --mbcache=200 \
--mbuf-size=2048 --portmask=0x1 --max-pkt-len=9000 --port-topology=loop \
--tx-offloads=DEV_TX_OFFLOAD_MULTI_SEGS
diff --git a/test_plans/vdev_primary_secondary_test_plan.rst b/test_plans/vdev_primary_secondary_test_plan.rst
index 1e6cd2e0..cb78d192 100644
--- a/test_plans/vdev_primary_secondary_test_plan.rst
+++ b/test_plans/vdev_primary_secondary_test_plan.rst
@@ -143,7 +143,7 @@ SW preparation: Change one line of the symmetric_mp sample and rebuild::
1. Bind one port to vfio-pci, launch testpmd by below command::
- ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1' -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./<build_target>/app/dpdk-testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1' -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>start
@@ -181,7 +181,7 @@ Test Case 2: Virtio-pmd primary and secondary process hotplug test
1. Launch testpmd by below command::
- ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1' -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./<build_target>/app/dpdk-testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1' -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>start
diff --git a/test_plans/veb_switch_test_plan.rst b/test_plans/veb_switch_test_plan.rst
index 1d7d11c2..bf2e1cd0 100644
--- a/test_plans/veb_switch_test_plan.rst
+++ b/test_plans/veb_switch_test_plan.rst
@@ -111,7 +111,7 @@ Details:
1. In VF1, run testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 1024,1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --socket-mem 1024,1024
-a 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
testpmd>set fwd txonly
testpmd>set promisc all off
@@ -119,7 +119,7 @@ Details:
In VF2, run testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 --socket-mem 1024,1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xa -n 4 --socket-mem 1024,1024
-a 05:02.1 --file-prefix=test2 -- -i --crc-strip
testpmd>set fwd rxonly
testpmd>set promisc all off
@@ -139,7 +139,7 @@ Details:
1. In VF1, run testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 1024,1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --socket-mem 1024,1024
-a 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
testpmd>set fwd mac
testpmd>set promisc all off
@@ -147,7 +147,7 @@ Details:
In VF2, run testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 --socket-mem 1024,1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xa -n 4 --socket-mem 1024,1024
-a 05:02.1 --file-prefix=test2 -- -i --crc-strip
testpmd>set fwd rxonly
testpmd>set promisc all off
@@ -174,7 +174,7 @@ Details:
2. In VF1, run testpmd::
- ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:02.0
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:02.0
--file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
testpmd>set fwd mac
testpmd>set promisc all off
@@ -182,7 +182,7 @@ Details:
In VF2, run testpmd::
- ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.1
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.1
--file-prefix=test2 -- -i --crc-strip
testpmd>set fwd rxonly
testpmd>set promisc all off
@@ -216,14 +216,14 @@ Details:
1. vf->pf
PF, launch testpmd::
- ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
testpmd>set fwd rxonly
testpmd>set promisc all off
testpmd>start
VF1, run testpmd::
- ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr
testpmd>set fwd txonly
testpmd>set promisc all off
testpmd>start
@@ -234,14 +234,14 @@ Details:
2. pf->vf
PF, launch testpmd::
- ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i --eth-peer=0,vf1_mac_addr
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i --eth-peer=0,vf1_mac_addr
testpmd>set fwd txonly
testpmd>set promisc all off
testpmd>start
VF1, run testpmd::
- ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i
testpmd>mac_addr add 0 vf1_mac_addr
testpmd>set fwd rxonly
testpmd>set promisc all off
@@ -253,14 +253,14 @@ Details:
3. tester->vf
PF, launch testpmd::
- ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
testpmd>set fwd mac
testpmd>set promisc all off
testpmd>start
VF1, run testpmd::
- ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i
testpmd>mac_addr add 0 vf1_mac_addr
testpmd>set fwd rxonly
testpmd>set promisc all off
@@ -273,19 +273,19 @@ Details:
4. vf1->vf2
PF, launch testpmd::
- ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
testpmd>set promisc all off
VF1, run testpmd::
- ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,vf2_mac_addr
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,vf2_mac_addr
testpmd>set fwd txonly
testpmd>set promisc all off
testpmd>start
VF2, run testpmd::
- ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 0000:05:02.1 --file-prefix=test3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 0000:05:02.1 --file-prefix=test3 -- -i
testpmd>mac_addr add 0 vf2_mac_addr
testpmd>set fwd rxonly
testpmd>set promisc all off
diff --git a/test_plans/vf_daemon_test_plan.rst b/test_plans/vf_daemon_test_plan.rst
index 07543c9d..cb7fb41d 100644
--- a/test_plans/vf_daemon_test_plan.rst
+++ b/test_plans/vf_daemon_test_plan.rst
@@ -87,7 +87,7 @@ Prerequisites
5. Start testpmd on host and vm0 in chained port topology::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
Test Case 1: Set VLAN insert for VF from PF
diff --git a/test_plans/vf_jumboframe_test_plan.rst b/test_plans/vf_jumboframe_test_plan.rst
index 2f4cae3d..f7b480a2 100644
--- a/test_plans/vf_jumboframe_test_plan.rst
+++ b/test_plans/vf_jumboframe_test_plan.rst
@@ -85,7 +85,7 @@ Prerequisites
5. Start testpmd, set it in mac forward mode::
- testpmd -c 0x0f-- -i --portmask=0x1 \
+ ./<build_target>/app/dpdk-testpmd -c 0x0f -- -i --portmask=0x1 \
--tx-offloads=0x8fff --max-pkt-len=9000 --port-topology=loop
testpmd> set fwd mac
testpmd> start
diff --git a/test_plans/vf_kernel_test_plan.rst b/test_plans/vf_kernel_test_plan.rst
index 5e42b8b3..f711f54a 100644
--- a/test_plans/vf_kernel_test_plan.rst
+++ b/test_plans/vf_kernel_test_plan.rst
@@ -81,7 +81,7 @@ Steps:
1. Enable multi-queues to start DPDK PF::
- ./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --rxq=4 --txq=4
2. Link up kernel VF and expect VF link up
diff --git a/test_plans/vf_l3fwd_test_plan.rst b/test_plans/vf_l3fwd_test_plan.rst
index 9fb97cc2..0e3d0541 100644
--- a/test_plans/vf_l3fwd_test_plan.rst
+++ b/test_plans/vf_l3fwd_test_plan.rst
@@ -91,6 +91,13 @@ Setup overview
Set up topology as above based on the NIC used.
+Build dpdk and examples=l3fwd::
+
+   CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+   ninja -C <build_target>
+
+   meson configure -Dexamples=l3fwd <build_target>
+   ninja -C <build_target>
+
Test Case 1: Measure performance with kernel PF & dpdk VF
=========================================================
@@ -111,7 +118,7 @@ take XL710 for example::
4, Start dpdk l3fwd with 1:1 matched cores and queues::
- ./examples/l3fwd/build/l3fwd -c 0xf -n 4 -- -p 0x3 --config '(0,0,0),(1,0,1),(0,1,2),(1,1,3)'
+ ./<build_target>/examples/dpdk-l3fwd -c 0xf -n 4 -- -p 0x3 --config '(0,0,0),(1,0,1),(0,1,2),(1,1,3)'
5, Send packet with frame size from 64bytes to 1518bytes with ixia traffic generator,
make sure your traffic configuration meets LPM rules, and will go to all queues, all ports.
@@ -150,13 +157,13 @@ take XL710 for example::
3, Start testpmd and set vfs mac address::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem=1024,1024 --file-prefix=pf -b 0000:18:02.0 -b 0000:18:06.0 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --socket-mem=1024,1024 --file-prefix=pf -b 0000:18:02.0 -b 0000:18:06.0 -- -i
testpmd>set vf mac addr 0 0 00:12:34:56:78:01
testpmd>set vf mac addr 1 0 00:12:34:56:78:02
4, Start dpdk l3fwd with 1:1 matched cores and queues::
- ./examples/l3fwd/build/l3fwd -c 0x3c -n 4 -a 0000:18:02.0 -a 0000:18:06.0 -- -p 0x3 --config '(0,0,2),(1,0,3),(0,1,4),(1,1,5)'
+ ./<build_target>/examples/dpdk-l3fwd -c 0x3c -n 4 -a 0000:18:02.0 -a 0000:18:06.0 -- -p 0x3 --config '(0,0,2),(1,0,3),(0,1,4),(1,1,5)'
5, Send packet with frame size from 64bytes to 1518bytes with ixia traffic generator,
make sure your traffic configuration meets LPM rules, and will go to all queues, all ports.
diff --git a/test_plans/vf_single_core_perf_test_plan.rst b/test_plans/vf_single_core_perf_test_plan.rst
index 53cf304a..8c4ff727 100644
--- a/test_plans/vf_single_core_perf_test_plan.rst
+++ b/test_plans/vf_single_core_perf_test_plan.rst
@@ -86,7 +86,7 @@ Test Case : Vf Single Core Performance Measurement
4. Start testpmd::
- ./dpdk-testpmd -l 28,29 -n 4 -- -i --portmask=0x3 --txd=512 --rxd=512 \
+ ./<build_target>/app/dpdk-testpmd -l 28,29 -n 4 -- -i --portmask=0x3 --txd=512 --rxd=512 \
--txq=2 --rxq=2 --nb-cores=1
testpmd> set fwd mac
diff --git a/test_plans/virtio_user_as_exceptional_path_test_plan.rst b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
index 2dffa877..97052d9e 100644
--- a/test_plans/virtio_user_as_exceptional_path_test_plan.rst
+++ b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
@@ -74,7 +74,7 @@ Flow:tap0-->vhost-net-->virtio_user-->nic0-->nic1
3. Bind nic0 to vfio-pci and launch the virtio_user with testpmd::
./dpdk-devbind.py -b vfio-pci xx:xx.x # xx:xx.x is the pci addr of nic0
- ./testpmd -c 0xc0000 -n 4 --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xc0000 -n 4 --file-prefix=test2 \
--vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
testpmd>set fwd csum
testpmd>stop
@@ -126,7 +126,7 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
2. Bind the physical port to vfio-pci, launch testpmd with one queue for virtio_user::
./dpdk-devbind.py -b vfio-pci xx:xx.x # xx:xx.x is the pci addr of nic0
- ./testpmd -l 1-2 -n 4 --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
3. Check if there is a tap device generated::
@@ -156,7 +156,7 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
2. Bind the physical port to vfio-pci, launch testpmd with two queues for virtio_user::
./dpdk-devbind.py -b vfio-pci xx:xx.x # xx:xx.x is the pci addr of nic0
- ./testpmd -l 1-2 -n 4 --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
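As a side note (an illustrative aside, not a step in the plan), the ``--vdev`` string above packs virtio_user device arguments as comma-separated ``key=value`` pairs after the device name; a minimal Python sketch of that decomposition (testpmd itself parses these with the rte_kvargs library):

```python
# Sketch: decompose a testpmd --vdev argument into device name + key/value args.
# Illustrative only; the real parsing in DPDK is done by rte_kvargs.

def parse_vdev(vdev):
    """Split 'name,key=value,...' into (name, {key: value})."""
    name, _, rest = vdev.partition(",")
    args = dict(kv.split("=", 1) for kv in rest.split(",")) if rest else {}
    return name, args

name, args = parse_vdev(
    "virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2"
)
print(name, args["path"], args["queues"])
# -> virtio_user0 /dev/vhost-net 2
```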
3. Check if there is a tap device generated::
diff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst b/test_plans/virtio_user_for_container_networking_test_plan.rst
index d28b30a1..3ec1d101 100644
--- a/test_plans/virtio_user_for_container_networking_test_plan.rst
+++ b/test_plans/virtio_user_for_container_networking_test_plan.rst
@@ -72,12 +72,12 @@ Test Case 1: packet forward test for container networking
2. Bind one port to vfio-pci, launch vhost::
- ./testpmd -l 1-2 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
+ ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
2. Start a container instance with a virtio-user port::
docker run -i -t --privileged -v /root/dpdk/vhost-net:/tmp/vhost-net -v /mnt/huge:/dev/hugepages \
- -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/testpmd -l 3-4 -n 4 -m 1024 --no-pci --file-prefix=container \
+ -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-pci --file-prefix=container \
--vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net -- -i
3. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check that virtio can receive and forward the packets correctly in the container::
@@ -94,12 +94,12 @@ Test Case 2: packet forward with multi-queues for container networking
2. Bind one port to vfio-pci, launch vhost::
- ./testpmd -l 1-3 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
+ ./<build_target>/app/dpdk-testpmd -l 1-3 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
2. Start a container instance with a virtio-user port::
docker run -i -t --privileged -v /root/dpdk/vhost-net:/tmp/vhost-net -v /mnt/huge:/dev/hugepages \
- -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/testpmd -l 4-6 -n 4 -m 1024 --no-pci --file-prefix=container \
+ -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-6 -n 4 -m 1024 --no-pci --file-prefix=container \
--vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=2 -- -i --rxq=2 --txq=2 --nb-cores=2
3. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check that virtio can receive and forward the packets in the container with two queues::
--
2.25.1
* [dts][PATCH V1 4/4] test_plans/*: modify test plan to adapt meson build
2022-01-22 18:20 [dts][PATCH V1 0/4] test_plans/*: modify test plan to adapt meson build Yu Jiang
` (2 preceding siblings ...)
2022-01-22 18:20 ` [dts][PATCH V1 3/4] " Yu Jiang
@ 2022-01-22 18:20 ` Yu Jiang
2022-01-25 2:22 ` Tu, Lijuan
3 siblings, 1 reply; 7+ messages in thread
From: Yu Jiang @ 2022-01-22 18:20 UTC (permalink / raw)
To: lijuan.tu, dts; +Cc: Yu Jiang
test_plans/*: modify test plan to adapt meson build
Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
test_plans/ABI_stable_test_plan.rst | 5 +-
test_plans/bbdev_test_plan.rst | 4 +-
test_plans/eventdev_perf_test_plan.rst | 36 ++--
.../eventdev_pipeline_perf_test_plan.rst | 25 ++-
test_plans/firmware_version_test_plan.rst | 2 +-
test_plans/ipsec_gw_and_library_test_plan.rst | 12 +-
test_plans/linux_modules_test_plan.rst | 10 +-
test_plans/mdd_test_plan.rst | 8 +-
test_plans/qos_meter_test_plan.rst | 2 +-
test_plans/qos_sched_test_plan.rst | 24 +--
| 2 +-
test_plans/rte_flow_test_plan.rst | 190 +++++++++---------
...time_vf_queue_number_maxinum_test_plan.rst | 8 +-
test_plans/speed_capabilities_test_plan.rst | 2 +-
test_plans/vmdq_dcb_test_plan.rst | 14 +-
15 files changed, 182 insertions(+), 162 deletions(-)
diff --git a/test_plans/ABI_stable_test_plan.rst b/test_plans/ABI_stable_test_plan.rst
index 16934c48..ae0eb7b9 100644
--- a/test_plans/ABI_stable_test_plan.rst
+++ b/test_plans/ABI_stable_test_plan.rst
@@ -65,6 +65,7 @@ Setup library path in environment::
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<dpdk_2002>
+meson: CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=shared <build_target>;ninja -C <build_target>
Common Test Steps
=================
@@ -75,7 +76,7 @@ application steps are below,
Go into <dpdk_1911> directory, launch application with specific library::
- testpmd -c 0xf -n 4 -d <dpdk_2002> -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -d <dpdk_2002> -- -i
Expect the application could launch successfully.
@@ -292,7 +293,7 @@ Build shared libraries, (just enable i40e pmd for testing)::
Run testpmd application refer to Common Test steps with ixgbe pmd NIC.::
- testpmd -c 0xf -n 4 -d <dpdk_2002> -a 18:00.0 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -d <dpdk_2002> -a 18:00.0 -- -i
Test txonly::
diff --git a/test_plans/bbdev_test_plan.rst b/test_plans/bbdev_test_plan.rst
index 2c4a0523..2e2a64e5 100644
--- a/test_plans/bbdev_test_plan.rst
+++ b/test_plans/bbdev_test_plan.rst
@@ -153,7 +153,7 @@ and operations timeout is set to 120s
and enqueue/dequeue burst size is set to 8 and to 32.
Moreover a bbdev (*turbo_sw*) device will be created::
- ./test-bbdev.py -p ../../x86_64-native-linuxapp-icc/app/testbbdev \
+ ./test-bbdev.py -p ../../x86_64-native-linuxapp-icc/app/dpdk-test-bbdev \
-e="--vdev=baseband_turbo_sw" -t 120 -c validation \
-v ./test_vectors/turbo_enc_c1_k40_r0_e1196_rm.data -n 64 -b 8 32
@@ -243,7 +243,7 @@ Test case 8: Turbo encoding and decoding offload and latency
It runs **offload** and **latency** tests for the Turbo encode vector file::
- ./test-bbdev.py -p ../../x86_64-native-linuxapp-icc/app/testbbdev \
+ ./test-bbdev.py -p ../../x86_64-native-linuxapp-icc/app/dpdk-test-bbdev \
-e="--vdev=baseband_turbo_sw" -t 120 -c offload latency \
-v ./test_vectors/turbo_enc_c1_k40_r0_e1196_rm.data \
./test_vectors/turbo_dec_c1_k40_r0_e17280_sbd_negllr.data -n 64 -l 16 -b 8 32
diff --git a/test_plans/eventdev_perf_test_plan.rst b/test_plans/eventdev_perf_test_plan.rst
index f8e81536..7256a151 100644
--- a/test_plans/eventdev_perf_test_plan.rst
+++ b/test_plans/eventdev_perf_test_plan.rst
@@ -49,7 +49,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
Parameters::
@@ -76,7 +76,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
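For reference (an illustrative aside, not a step in the plan), the hex masks used throughout these command lines (``-c``, ``--wlcores``) are plain bitmasks over lcore ids, equivalent to the ``-l`` list form; a minimal sketch of the correspondence:

```python
# Sketch: relate an lcore list (-l form) to the hex core masks (-c / --wlcores)
# used across these test plans. Hypothetical helper, not a DTS utility.

def coremask(lcores):
    """Hex EAL coremask equivalent to a list of lcore ids."""
    mask = 0
    for lc in lcores:
        mask |= 1 << lc
    return hex(mask)

print(coremask([22, 23]))   # the lcores from '-l 22-23'  -> 0xc00000
print(coremask([23]))       # the worker in '--wlcores=23' -> 0x800000
```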
@@ -88,7 +88,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -100,7 +100,7 @@ Description: Execute performance test with Atomic_queue type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -112,7 +112,7 @@ Description: Execute performance test with Parallel_queue type of stage in multi
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -124,7 +124,7 @@ Description: Execute performance test with Ordered_queue type of stage in multi-
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -136,7 +136,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -148,7 +148,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -160,7 +160,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -172,7 +172,7 @@ Description: Execute performance test with Atomic_queue type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -184,7 +184,7 @@ Description: Execute performance test with Parallel_queue type of stage in multi
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -196,7 +196,7 @@ Description: Execute performance test with Ordered_queue type of stage in multi-
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -209,7 +209,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -w device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -221,7 +221,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -233,7 +233,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -245,7 +245,7 @@ Description: Execute performance test with Atomic_queue type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -257,7 +257,7 @@ Description: Execute performance test with Parallel_queue type of stage in multi
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -269,7 +269,7 @@ Description: Execute performance test with Ordered_queue type of stage in multi-
1. Run the sample with below command::
- # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
+ # ./<build_target>/app/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst b/test_plans/eventdev_pipeline_perf_test_plan.rst
index 34464ab6..90d08f10 100644
--- a/test_plans/eventdev_pipeline_perf_test_plan.rst
+++ b/test_plans/eventdev_pipeline_perf_test_plan.rst
@@ -22,6 +22,13 @@ to the device under test ::
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
usertools/dpdk-devbind.py --bind=vfio-pci eventdev_device_bus_id
+Build dpdk and the eventdev_pipeline example::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=eventdev_pipeline <build_target>
+ ninja -C <build_target>
+
Create huge pages
=================
mkdir -p /dev/huge
@@ -51,7 +58,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 --dump
Parameters::
@@ -75,7 +82,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 -p --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 -p --dump
Parameters::
@@ -100,7 +107,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 -o --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 -o --dump
Parameters::
@@ -125,7 +132,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 --dump
Parameters::
@@ -149,7 +156,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 -p --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 -p --dump
Parameters::
@@ -174,7 +181,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 -o --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 -o --dump
Parameters::
@@ -199,7 +206,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 --dump
Parameters::
@@ -223,7 +230,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 -p --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 -p --dump
Parameters::
@@ -248,7 +255,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
1. Run the sample with below command::
- # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 -o --dump
+ # ./<build_target>/examples/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 -o --dump
Parameters::
diff --git a/test_plans/firmware_version_test_plan.rst b/test_plans/firmware_version_test_plan.rst
index d9cf47f8..64941db5 100644
--- a/test_plans/firmware_version_test_plan.rst
+++ b/test_plans/firmware_version_test_plan.rst
@@ -53,7 +53,7 @@ to the device under test::
Assuming that ports are up and working, then launch the ``testpmd`` application
with the following arguments::
- ./build/app/testpmd -- -i --portmask=0x3
+ ./build/app/dpdk-testpmd -- -i --portmask=0x3
Ensure the ``firmware_version.cfg`` file has the correct name and firmware
version.
diff --git a/test_plans/ipsec_gw_and_library_test_plan.rst b/test_plans/ipsec_gw_and_library_test_plan.rst
index 74bf407f..1cd80783 100644
--- a/test_plans/ipsec_gw_and_library_test_plan.rst
+++ b/test_plans/ipsec_gw_and_library_test_plan.rst
@@ -114,7 +114,7 @@ To test IPsec, an example ipsec-secgw is added into DPDK.
The test commands of ipsec-secgw is below::
- ./build/ipsec-secgw [EAL options] --
+ ./<build_target>/examples/dpdk-ipsec-secgw [EAL options] --
-p PORTMASK -P -u PORTMASK -j FRAMESIZE
-l -w REPLAY_WINOW_SIZE -e -a
--config (port,queue,lcore)[,(port,queue,lcore)]
@@ -127,6 +127,10 @@ compile the applications::
make -C ./examples/ipsec-secgw
+ meson:
+ meson configure -Dexamples=ipsec-secgw <build_target>
+ ninja -C <build_target>
+
Configuration File Syntax:
The ``-f CONFIG_FILE_PATH`` option enables the application read and
@@ -202,7 +206,7 @@ Cryptodev AES-NI algorithm validation matrix is showed in table below.
AESNI_MB device start cmd::
- ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -a 0000:60:00.0
+ ./<build_target>/examples/dpdk-ipsec-secgw --socket-mem 2048,0 --legacy-mem -a 0000:60:00.0
--vdev=net_tap0,mac=fixed --vdev crypto_aesni_mb_pmd_1 --vdev=crypto_aesni_mb_pmd_2 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)"
-u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
@@ -230,7 +234,7 @@ Cryptodev QAT algorithm validation matrix is showed in table below.
QAT device start cmd::
- ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem --vdev=net_tap0,mac=fixed -a 0000:60:00.0
+ ./<build_target>/examples/dpdk-ipsec-secgw --socket-mem 2048,0 --legacy-mem --vdev=net_tap0,mac=fixed -a 0000:60:00.0
-a 0000:1a:01.0 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3
-f /root/dts/local_conf/ipsec_test.cfg
@@ -244,7 +248,7 @@ AES_GCM_PMD algorithm validation matrix is showed in table below.
AESNI_GCM device start cmd::
- ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -a 0000:60:00.0 --vdev=net_tap0,mac=fixed
+ ./<build_target>/examples/dpdk-ipsec-secgw --socket-mem 2048,0 --legacy-mem -a 0000:60:00.0 --vdev=net_tap0,mac=fixed
--vdev crypto_aesni_gcm_pmd_1 --vdev=crypto_aesni_gcm_pmd_2 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)"
-u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
diff --git a/test_plans/linux_modules_test_plan.rst b/test_plans/linux_modules_test_plan.rst
index 57b0327d..e8d41996 100644
--- a/test_plans/linux_modules_test_plan.rst
+++ b/test_plans/linux_modules_test_plan.rst
@@ -80,7 +80,7 @@ Bind the interface to the driver ::
Start testpmd in a loop configuration ::
- # x86_64-native-linux-gcc/app/testpmd -l 1,2 -n 4 -a xxxx:xx:xx.x \
+ # ./<build_target>/app/dpdk-testpmd -l 1,2 -n 4 -a xxxx:xx:xx.x \
-- -i --port-topology=loop
Start packet forwarding ::
@@ -122,7 +122,7 @@ Grant permissions for all users to access the new character device ::
Start testpmd in a loop configuration ::
- $ x86_64-native-linux-gcc/app/testpmd -l 1,2 -n 4 -a xxxx:xx:xx.x --in-memory \
+ $ ./<build_target>/app/dpdk-testpmd -l 1,2 -n 4 -a xxxx:xx:xx.x --in-memory \
-- -i --port-topology=loop
Start packet forwarding ::
@@ -148,11 +148,13 @@ application.
Compile the application ::
- # cd $RTE_SDK/examples/helloworld && make
+ make: # cd $RTE_SDK/examples/helloworld && make
+ meson: meson configure -Dexamples=helloworld <build_target>;ninja -C <build_target>
Run the application ::
- $ $RTE_SDK/examples/helloworld/build/helloworld-shared --in-memory
+ make: $ $RTE_SDK/examples/helloworld/build/helloworld-shared --in-memory
+ meson: $ ./<build_target>/examples/dpdk-helloworld --in-memory
Check for any error states or reported errors.
diff --git a/test_plans/mdd_test_plan.rst b/test_plans/mdd_test_plan.rst
index 3efa75b9..5fb1457e 100644
--- a/test_plans/mdd_test_plan.rst
+++ b/test_plans/mdd_test_plan.rst
@@ -75,7 +75,7 @@ Test Case 1: enable_mdd_dpdk_disable
5. Turn on testpmd and set mac forwarding mode::
- ./testpmd -c 0x0f -n 4 -- -i --portmask=0x3 --tx-offloads=0x1
+ ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i --portmask=0x3 --tx-offloads=0x1
testpmd> set fwd mac
testpmd> start
@@ -143,7 +143,7 @@ Test Case 2: enable_mdd_dpdk_enable
5. Turn on testpmd and set mac forwarding mode::
- ./testpmd -c 0x0f -n 4 -- -i --portmask=0x3 --tx-offloads=0x0
+ ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i --portmask=0x3 --tx-offloads=0x0
testpmd> set fwd mac
testpmd> start
@@ -211,7 +211,7 @@ Test Case 3: disable_mdd_dpdk_disable
5. Turn on testpmd and set mac forwarding mode::
- ./testpmd -c 0xf -n 4 -- -i --portmask=0x3 --tx-offloads=0x1
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -- -i --portmask=0x3 --tx-offloads=0x1
testpmd> set fwd mac
testpmd> start
@@ -279,7 +279,7 @@ Test Case 4: disable_mdd_dpdk_enable
5. Turn on testpmd and set mac forwarding mode::
- ./testpmd -c 0xf -n 4 -- -i --portmask=0x3 --tx-offloads=0x0
+ ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 -- -i --portmask=0x3 --tx-offloads=0x0
testpmd> set fwd mac
testpmd> start
diff --git a/test_plans/qos_meter_test_plan.rst b/test_plans/qos_meter_test_plan.rst
index 4ab5825f..d164c1c6 100644
--- a/test_plans/qos_meter_test_plan.rst
+++ b/test_plans/qos_meter_test_plan.rst
@@ -86,7 +86,7 @@ dut_port_1 : "0000:05:00.1"
and 2 ports only in the application port mask (first port from the port
mask is used for RX and the other port in the port mask is used for TX)::
- ./build/qos_meter -c 1 -n 4 -- -p 0x3
+ ./<build_target>/examples/dpdk-qos_meter -c 1 -n 4 -- -p 0x3
Test Case: srTCM blind input color RED
======================================
diff --git a/test_plans/qos_sched_test_plan.rst b/test_plans/qos_sched_test_plan.rst
index 28c8c611..9a893d9b 100644
--- a/test_plans/qos_sched_test_plan.rst
+++ b/test_plans/qos_sched_test_plan.rst
@@ -37,7 +37,7 @@ QoS Scheduler Tests
The QoS Scheduler results are produced using the ``qos_sched`` application.
The application has a number of command line options::
- ./qos_sched [EAL options] -- <APP PARAMS>
+ ./<build_target>/examples/dpdk-qos_sched [EAL options] -- <APP PARAMS>
Mandatory application parameters include:
-pfc “RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE”: Packet flow configuration.
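As an illustrative aside (not part of the plan), the ``--pfc`` string above is a comma-separated list of the fields just described, with the TX lcore optional (when omitted, the worker thread also transmits, as in the later test cases); a minimal Python sketch of that layout:

```python
# Sketch: split a qos_sched --pfc string into its named fields.
# Field order taken from the parameter description above; the trailing TX
# lcore is optional. Hypothetical helper, not a DTS utility.

PFC_FIELDS = ("rx_port", "tx_port", "rx_lcore", "wt_lcore", "tx_lcore")

def parse_pfc(pfc):
    """Map '--pfc' values onto named fields (tx_lcore absent if not given)."""
    values = [int(v) for v in pfc.split(",")]
    return dict(zip(PFC_FIELDS, values))

print(parse_pfc("0,1,5,7"))    # RX port 0 -> TX port 1, RX on lcore 5, worker on 7
print(parse_pfc("0,1,2,6,7"))  # same flow with a dedicated TX lcore 7
```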
@@ -92,7 +92,7 @@ Test Case: 1 pipe, 8 TCs
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
2. The traffic manage setting is configured in profile.cfg.
Set flows with QinQ inner vlan ID=0, which represents pipe 0.
@@ -164,7 +164,7 @@ Test Case: 4 pipe, 4 TCs
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
2. The traffic manage setting is configured in profile.cfg.
Set four flows with QinQ inner vlan ID=0/1/2/3, which represent pipes 0/1/2/3.
@@ -189,7 +189,7 @@ Test Case: 1 pipe, 12 TCs
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
2. The traffic manage setting is configured in profile.cfg.
change pipe profile 0, set tb rate and tc rate to 1/40.96 port rate::
@@ -253,7 +253,7 @@ Test Case: 1 pipe, set a TC rate to 0
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
2. The traffic management settings are configured in profile.cfg.
Change pipe profile 0: set tb rate and tc rate to 1/40.96 of the port rate::
@@ -304,7 +304,7 @@ Test Case: best effort TC12
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
2. The traffic management settings are configured in profile.cfg.
Set flows with QinQ inner vlan ID=0, which represents pipe 0.
@@ -363,7 +363,7 @@ Test Case: 4096 pipes, 12 TCs
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- -i --pfc "0,1,5,7" --cfg ../profile.cfg
2. The traffic management settings are configured in profile.cfg.
Set flows with QinQ inner vlan ID=random, representing pipes 0-4095.
@@ -426,7 +426,7 @@ Test Case: 4096 pipes, 12 TCs
3. If TX core defined::
- ./qos_sched -l 1,2,6,7 -n 4 -- -i --pfc "0,1,2,6,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,2,6,7 -n 4 -- -i --pfc "0,1,2,6,7" --cfg ../profile.cfg
The received rate can reach line rate (13.89 Mpps) and no packets are dropped::
@@ -473,7 +473,7 @@ Test Case: qos_sched of two ports
1. This example configures two packet flows using different ports
but sharing the same core for the QoS scheduler::
- ./qos_sched -l 1,2,6,7 -n 4 -- --pfc "0,1,2,6,7" --pfc "1,0,2,6,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,2,6,7 -n 4 -- --pfc "0,1,2,6,7" --pfc "1,0,2,6,7" --cfg ../profile.cfg
2. The traffic management settings are configured in profile.cfg.
Set flows with QinQ inner vlan ID=random, representing pipes 0-4095.
@@ -623,7 +623,7 @@ so the two supports case can't be verified.*
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,2,5,7 -n 4 -- -i --pfc "0,1,2,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,2,5,7 -n 4 -- -i --pfc "0,1,2,5,7" --cfg ../profile.cfg
3. The generator settings:
Set the IP dst address mode to random, with mask "255.255.255.0".
@@ -648,7 +648,7 @@ so the two supports case can't be verified.*
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,2,5,7 -n 4 -- -i --pfc "0,1,2,5,7" --cfg ../profile.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,2,5,7 -n 4 -- -i --pfc "0,1,2,5,7" --cfg ../profile.cfg
3. The generator settings:
Set the IP dst address mode to random, with mask "255.255.255.0".
@@ -672,7 +672,7 @@ Test Case: Redistribution of unused pipe BW to other pipes within the same subpo
which creates one RX thread on lcore 5 reading from port 0
and a worker thread on lcore 7 writing to port 1::
- ./qos_sched -l 1,2,5,7 -n 4 -- -i --pfc "0,1,2,5,7" --cfg ../profile_ov.cfg
+ ./<build_target>/examples/dpdk-qos_sched -l 1,2,5,7 -n 4 -- -i --pfc "0,1,2,5,7" --cfg ../profile_ov.cfg
3. The generator settings:
Configure 4 flows:
--git a/test_plans/rss_key_update_test_plan.rst b/test_plans/rss_key_update_test_plan.rst
index 43f726bc..699f1bdc 100644
--- a/test_plans/rss_key_update_test_plan.rst
+++ b/test_plans/rss_key_update_test_plan.rst
@@ -51,7 +51,7 @@ Test Case: test_set_hash_key_toeplitz
#. Launch the ``testpmd`` application with the following arguments::
- ./testpmd -c ffffff -n 4 -- -i --portmask=0x6 --rxq=16 --txq=16
+ ./<build_target>/app/dpdk-testpmd -c ffffff -n 4 -- -i --portmask=0x6 --rxq=16 --txq=16
#. PMD fwd only receive the packets::
diff --git a/test_plans/rte_flow_test_plan.rst b/test_plans/rte_flow_test_plan.rst
index b53f64b8..edbf59d6 100644
--- a/test_plans/rte_flow_test_plan.rst
+++ b/test_plans/rte_flow_test_plan.rst
@@ -76,7 +76,7 @@ Test Case: dst (destination MAC) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -116,7 +116,7 @@ Test Case: src (source MAC) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -157,7 +157,7 @@ Test Case: type (EtherType or TPID) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -201,7 +201,7 @@ Test Case: protocol (protocol type) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -246,7 +246,7 @@ Test Case: icmp_type (ICMP message type) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -291,7 +291,7 @@ We tested type 3, code 3.
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -335,7 +335,7 @@ Test Case: tos (Type of Service) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -377,7 +377,7 @@ Test Case: ttl (time to live) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -418,7 +418,7 @@ Test Case: proto (IPv4 protocol) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -459,7 +459,7 @@ Test Case: src (IPv4 source) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -499,7 +499,7 @@ Test Case: dst (IPv4 destination) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -542,7 +542,7 @@ Test Case: tc (Traffic Class) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -583,7 +583,7 @@ Test Case: flow (Flow Code) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -624,7 +624,7 @@ Test Case: proto (IPv6 protocol/next header protocol) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -665,7 +665,7 @@ Test Case: hop (Hop Limit) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -704,7 +704,7 @@ Test Case: dst (IPv6 destination) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -744,7 +744,7 @@ Test Case: src (IPv6 source) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -788,7 +788,7 @@ Test Case: src (source port) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -826,7 +826,7 @@ Test Case: dst (destination port) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -866,7 +866,7 @@ Test Case: tag (SCTP header tag) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -907,7 +907,7 @@ Test Case: cksum (SCTP header checksum) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -950,7 +950,7 @@ Test Case: src (source port) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -991,7 +991,7 @@ Test Case: dst (destination port) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1032,7 +1032,7 @@ Test Case: flags (TCP flags) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1075,7 +1075,7 @@ Test Case: src (source port) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1115,7 +1115,7 @@ Test Case: dst (destination port) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1161,7 +1161,7 @@ We test them altogether as the tci and we test each field individually.
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1203,7 +1203,7 @@ Test Case: pcp (Priority Code Point) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1242,7 +1242,7 @@ NOTE: The only two possible values for dei are 0 and 1.
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1279,7 +1279,7 @@ Test Case: vid (VLAN identifier) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1320,7 +1320,7 @@ Test Case: tpid (Tag Protocol Identifier) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1364,7 +1364,7 @@ Test Case: vni (VXLAN network identifier) rule
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1423,7 +1423,7 @@ Test Case: passthru test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1465,7 +1465,7 @@ Test Case: flag test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1507,7 +1507,7 @@ Test Case: drop test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1550,7 +1550,7 @@ Test Case: test_shared
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1590,7 +1590,7 @@ Test Case: test_id
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1633,7 +1633,7 @@ Test Case: mac_swap test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1676,7 +1676,7 @@ Test Case: dec_ttl test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1719,7 +1719,7 @@ Test Case: jump test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1763,7 +1763,7 @@ Test Case: mark test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1806,7 +1806,7 @@ Test Case: queue test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1849,7 +1849,7 @@ Test Case: pf test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1892,7 +1892,7 @@ Test Case: test_original
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1932,7 +1932,7 @@ Test Case: test_id
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -1976,7 +1976,7 @@ Test Case: test_original
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2016,7 +2016,7 @@ Test Case: test_index
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2060,7 +2060,7 @@ Test Case: test_original
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2100,7 +2100,7 @@ Test Case: test_id
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2144,7 +2144,7 @@ Test Case: meter test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2187,7 +2187,7 @@ Test Case: security test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2231,7 +2231,7 @@ Test Case: of_set_mpls_ttl test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2274,7 +2274,7 @@ Test Case: of_dec_mpls_ttl test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2318,7 +2318,7 @@ Test Case: of_set_nw_ttl test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2362,7 +2362,7 @@ Test Case: of_dec_nw_ttl test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2405,7 +2405,7 @@ Test Case: of_copy_ttl_out test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2448,7 +2448,7 @@ Test Case: of_copy_ttl_in test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2491,7 +2491,7 @@ Test Case: of_pop_vlan test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2534,7 +2534,7 @@ Test Case: of_push_vlan test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2578,7 +2578,7 @@ Test Case: of_set_vlan_vid test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2621,7 +2621,7 @@ Test Case: of_set_vlan_pcp test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2664,7 +2664,7 @@ Test Case: of_pop_mpls test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2707,7 +2707,7 @@ Test Case: of_push_mpls test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2750,7 +2750,7 @@ Test Case: vxlan_encap
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2793,7 +2793,7 @@ Test Case: vxlan_decap
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2836,7 +2836,7 @@ Test Case: test_data
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2877,7 +2877,7 @@ Test Case: test_preserve
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2919,7 +2919,7 @@ Test Case: test_size
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -2964,7 +2964,7 @@ Test Case: test_data
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3005,7 +3005,7 @@ Test Case: test_size
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3049,7 +3049,7 @@ Test Case: set_ipv4_src test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3093,7 +3093,7 @@ Test Case: set_ipv4_dst test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3136,7 +3136,7 @@ Test Case: set_ipv6_src test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3181,7 +3181,7 @@ Test Case: set_ipv6_dst test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3226,7 +3226,7 @@ Test Case: test_udp
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3266,7 +3266,7 @@ Test Case: test_tcp
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3310,7 +3310,7 @@ Test Case: test_udp
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3350,7 +3350,7 @@ Test Case: test_tcp
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3392,7 +3392,7 @@ Test Case: set_ttl test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3435,7 +3435,7 @@ Test Case: set_mac_src test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3478,7 +3478,7 @@ Test Case: set_mac_dst test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3521,7 +3521,7 @@ Test Case: inc_tcp_seq test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3564,7 +3564,7 @@ Test Case: dec_tcp_seq test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3607,7 +3607,7 @@ Test Case: inc_tcp_ack test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3650,7 +3650,7 @@ Test Case: dec_tcp_ack test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3693,7 +3693,7 @@ Test Case: test_data
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3734,7 +3734,7 @@ Test Case: test_mask
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3776,7 +3776,7 @@ Test Case: test_index
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3820,7 +3820,7 @@ Test Case: test_data
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3861,7 +3861,7 @@ Test Case: test_mask
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3904,7 +3904,7 @@ Test Case: set_ipv4_dscp test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3948,7 +3948,7 @@ Test Case: set_ipv6_dscp test
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -3992,7 +3992,7 @@ Test Case: test_timeout
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -4032,7 +4032,7 @@ Test Case: test_reserved
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
@@ -4072,7 +4072,7 @@ Test Case: test_context
::
- build/testpmd -c 3 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 3 -- -i
..
diff --git a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
index 333b993d..1a9ff77d 100644
--- a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
+++ b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
@@ -108,7 +108,7 @@ Test case 1: VF consume max queue number on one PF port
================================================================
1. Start the PF testpmd::
- ./testpmd -c f -n 4 -a 05:00.0 --file-prefix=test1 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 05:00.0 --file-prefix=test1 \
--socket-mem 1024,1024 -- -i
2. Start the two testpmd to consume maximum queues::
@@ -120,10 +120,10 @@ Test case 1: VF consume max queue number on one PF port
The driver will allocate queues as a power of 2, and the queue number must be less than or equal to 16,
so the second VF testpmd can only start with '--rxq=8 --txq=8'::
- ./testpmd -c 0xf0 -n 4 -a 05:02.0 -a 05:02.1 -a 05:02.2 -a... --file-prefix=test2 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 05:02.0 -a 05:02.1 -a 05:02.2 -a... --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=16 --txq=16
- ./testpmd -c 0xf00 -n 4 -a 05:05.7 --file-prefix=test3 \
+ ./<build_target>/app/dpdk-testpmd -c 0xf00 -n 4 -a 05:05.7 --file-prefix=test3 \
--socket-mem 1024,1024 -- -i --rxq=8 --txq=8
Check the Max possible RX queues and TX queues of the two VFs are both 16::
@@ -154,7 +154,7 @@ Test case 2: set max queue number per vf on one pf port
As the feature description states, the max value of queue-num-per-vf is 8
for both two-port and four-port Fortville NICs::
- ./testpmd -c f -n 4 -a 05:00.0,queue-num-per-vf=16 --file-prefix=test1 \
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -a 05:00.0,queue-num-per-vf=16 --file-prefix=test1 \
--socket-mem 1024,1024 -- -i
The PF port fails to start with "i40e_pf_parameter_init():
diff --git a/test_plans/speed_capabilities_test_plan.rst b/test_plans/speed_capabilities_test_plan.rst
index 28216120..eb341e96 100644
--- a/test_plans/speed_capabilities_test_plan.rst
+++ b/test_plans/speed_capabilities_test_plan.rst
@@ -52,7 +52,7 @@ Assuming that ports ``0`` and ``1`` of the test target are directly connected
to the traffic generator, launch the ``testpmd`` application with the following
arguments::
- ./build/app/testpmd -- -i --portmask=0x3
+ ./<build_target>/app/dpdk-testpmd -- -i --portmask=0x3
Start packet forwarding in the ``testpmd`` application with the ``start``
command. Then, for each port on the target make the Traffic Generator
diff --git a/test_plans/vmdq_dcb_test_plan.rst b/test_plans/vmdq_dcb_test_plan.rst
index fc173993..1c9b82ef 100644
--- a/test_plans/vmdq_dcb_test_plan.rst
+++ b/test_plans/vmdq_dcb_test_plan.rst
@@ -64,14 +64,16 @@ Prerequisites
to the pool numbers (inclusive) and the VLAN user priority field increments from
0 to 7 (inclusive) for each VLAN ID.
- Build vmdq_dcb example,
- make -C examples/vmdq_dcb RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
+ make: make -C examples/vmdq_dcb RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
+ meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
Test Case 1: Verify VMDQ & DCB with 32 Pools and 4 TCs
======================================================
1. Run the application as the following::
- ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
+ make: ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
+ meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
2. Start traffic transmission using approx 10% of line rate.
3. After a number of seconds, e.g. 15, stop traffic, and ensure no traffic
@@ -92,10 +94,12 @@ Test Case 2: Verify VMDQ & DCB with 16 Pools and 8 TCs
======================================================
1. change CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM to 8 in "./config/common_linuxapp", rebuild DPDK.
+ meson: change "#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4" to 8 in config/rte_config.h, rebuild DPDK.
2. Repeat Test Case 1, with `--nb-pools 16` and `--nb-tcs 8` of the sample application::
- ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
+ make: ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
+ meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
Expected result:
- No packet loss is expected
@@ -103,10 +107,12 @@ Expected result:
- verify queue should be equal "vlan user priority value"
3. change CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM to 16 in "./config/common_linuxapp", rebuild DPDK.
+ meson: change "#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4" to 16 in config/rte_config.h, rebuild DPDK.
4. Repeat Test Case 1, with `--nb-pools 16` and `--nb-tcs 8` of the sample application::
- ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
+ make: ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
+ meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
Expected result:
- No packet loss is expected
--
2.25.1
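Editor's context, not part of the patch itself: the `<build_target>` placeholder used throughout these commands stands for the meson build directory. A minimal sketch of how the paths resolve, assuming a build directory named `x86_64-native-linuxapp-gcc` (the directory name is an illustration, not mandated by the patch):

```shell
# Typical meson workflow that produces the binaries referenced above.
# The build commands are shown as comments because they need a DPDK
# source tree; they are the usual DPDK invocation:
#   meson setup <build_target> -Dexamples=all
#   ninja -C <build_target>
# The placeholder then resolves to concrete binary paths, e.g.:
build_target=x86_64-native-linuxapp-gcc   # assumed directory name
testpmd=./${build_target}/app/dpdk-testpmd
qos_sched=./${build_target}/examples/dpdk-qos_sched
echo "${testpmd}"
echo "${qos_sched}"
```

The old make-era paths such as `build/testpmd` or `./qos_sched` carried no such prefix, which is exactly what this series updates in the test plans.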
* RE: [dts][PATCH V1 4/4] test_plans/*: modify test plan to adapt meson build
2022-01-22 18:20 ` [dts][PATCH V1 4/4] " Yu Jiang
@ 2022-01-25 2:22 ` Tu, Lijuan
0 siblings, 0 replies; 7+ messages in thread
From: Tu, Lijuan @ 2022-01-25 2:22 UTC (permalink / raw)
To: Jiang, YuX, dts
> -----Original Message-----
> From: Jiang, YuX <yux.jiang@intel.com>
> Sent: January 23, 2022 2:21
> To: Tu, Lijuan <lijuan.tu@intel.com>; dts@dpdk.org
> Cc: Jiang, YuX <yux.jiang@intel.com>
> Subject: [dts][PATCH V1 4/4] test_plans/*: modify test plan to adapt meson
> build
>
> test_plans/*: modify test plan to adapt meson build
>
> Signed-off-by: Yu Jiang <yux.jiang@intel.com>
Applied the series, thanks
* [dts][PATCH V1 0/4] test_plans/*: modify test plan to adapt meson build
@ 2022-01-22 17:55 Yu Jiang
0 siblings, 0 replies; 7+ messages in thread
From: Yu Jiang @ 2022-01-22 17:55 UTC (permalink / raw)
To: lijuan.tu, dts; +Cc: Yu Jiang
test_plans/*: modify test plan to adapt meson build
Yu Jiang (4):
test_plans/*: modify test plan to adapt meson build
test_plans/*: modify test plan to adapt meson build
test_plans/*: modify test plan to adapt meson build
test_plans/*: modify test plan to adapt meson build
test_plans/ABI_stable_test_plan.rst | 5 +-
test_plans/bbdev_test_plan.rst | 4 +-
test_plans/blocklist_test_plan.rst | 6 +-
test_plans/checksum_offload_test_plan.rst | 2 +-
.../cloud_filter_with_l4_port_test_plan.rst | 2 +-
test_plans/cmdline_test_plan.rst | 9 +-
test_plans/dcf_lifecycle_test_plan.rst | 52 ++---
test_plans/ddp_gtp_qregion_test_plan.rst | 2 +-
test_plans/ddp_gtp_test_plan.rst | 2 +-
test_plans/ddp_l2tpv3_test_plan.rst | 2 +-
test_plans/ddp_mpls_test_plan.rst | 2 +-
test_plans/ddp_ppp_l2tp_test_plan.rst | 2 +-
test_plans/dual_vlan_test_plan.rst | 2 +-
test_plans/dynamic_flowtype_test_plan.rst | 2 +-
test_plans/dynamic_queue_test_plan.rst | 2 +-
test_plans/eeprom_dump_test_plan.rst | 2 +-
test_plans/ethtool_stats_test_plan.rst | 34 ++--
test_plans/eventdev_perf_test_plan.rst | 36 ++--
.../eventdev_pipeline_perf_test_plan.rst | 25 ++-
test_plans/eventdev_pipeline_test_plan.rst | 24 ++-
test_plans/external_memory_test_plan.rst | 8 +-
.../external_mempool_handler_test_plan.rst | 23 ++-
test_plans/firmware_version_test_plan.rst | 2 +-
test_plans/interrupt_pmd_test_plan.rst | 15 +-
test_plans/ip_pipeline_test_plan.rst | 33 +--
test_plans/ipgre_test_plan.rst | 6 +-
test_plans/ipsec_gw_and_library_test_plan.rst | 12 +-
test_plans/ipv4_reassembly_test_plan.rst | 24 ++-
..._get_extra_queue_information_test_plan.rst | 4 +-
test_plans/jumboframes_test_plan.rst | 4 +-
test_plans/kernelpf_iavf_test_plan.rst | 12 +-
test_plans/kni_test_plan.rst | 14 +-
test_plans/l2fwd_jobstats_test_plan.rst | 11 +-
test_plans/l2tp_esp_coverage_test_plan.rst | 12 +-
test_plans/l3fwdacl_test_plan.rst | 39 ++--
test_plans/large_vf_test_plan.rst | 10 +-
test_plans/link_flowctrl_test_plan.rst | 2 +-
.../link_status_interrupt_test_plan.rst | 9 +-
test_plans/linux_modules_test_plan.rst | 10 +-
...ack_multi_paths_port_restart_test_plan.rst | 40 ++--
.../loopback_multi_queues_test_plan.rst | 80 ++++----
test_plans/mac_filter_test_plan.rst | 2 +-
test_plans/macsec_for_ixgbe_test_plan.rst | 10 +-
...ious_driver_event_indication_test_plan.rst | 8 +-
test_plans/mdd_test_plan.rst | 8 +-
.../metering_and_policing_test_plan.rst | 28 +--
test_plans/mtu_update_test_plan.rst | 2 +-
test_plans/multiple_pthread_test_plan.rst | 68 +++----
test_plans/ptpclient_test_plan.rst | 10 +-
test_plans/ptype_mapping_test_plan.rst | 2 +-
test_plans/qinq_filter_test_plan.rst | 16 +-
test_plans/qos_api_test_plan.rst | 18 +-
test_plans/qos_meter_test_plan.rst | 2 +-
test_plans/qos_sched_test_plan.rst | 24 +--
test_plans/queue_region_test_plan.rst | 2 +-
test_plans/queue_start_stop_test_plan.rst | 2 +-
test_plans/rss_key_update_test_plan.rst | 2 +-
test_plans/rss_to_rte_flow_test_plan.rst | 30 +--
test_plans/rte_flow_test_plan.rst | 190 +++++++++---------
test_plans/rteflow_priority_test_plan.rst | 16 +-
...ntime_vf_queue_number_kernel_test_plan.rst | 10 +-
...time_vf_queue_number_maxinum_test_plan.rst | 8 +-
.../runtime_vf_queue_number_test_plan.rst | 26 +--
test_plans/rxtx_callbacks_test_plan.rst | 11 +-
test_plans/rxtx_offload_test_plan.rst | 16 +-
test_plans/scatter_test_plan.rst | 2 +-
test_plans/speed_capabilities_test_plan.rst | 2 +-
.../vdev_primary_secondary_test_plan.rst | 4 +-
test_plans/veb_switch_test_plan.rst | 30 +--
test_plans/vf_daemon_test_plan.rst | 2 +-
test_plans/vf_jumboframe_test_plan.rst | 2 +-
test_plans/vf_kernel_test_plan.rst | 2 +-
test_plans/vf_l3fwd_test_plan.rst | 13 +-
test_plans/vf_single_core_perf_test_plan.rst | 2 +-
...tio_user_as_exceptional_path_test_plan.rst | 6 +-
...ser_for_container_networking_test_plan.rst | 8 +-
test_plans/vmdq_dcb_test_plan.rst | 14 +-
77 files changed, 638 insertions(+), 547 deletions(-)
--
2.25.1