From: Yu Jiang <yux.jiang@intel.com>
To: lijuan.tu@intel.com, dts@dpdk.org
Cc: Yu Jiang <yux.jiang@intel.com>
Subject: [dts][PATCH V1 1/4] test_plans/*: modify test plan to adapt meson build
Date: Sat, 22 Jan 2022 17:55:30 +0000 [thread overview]
Message-ID: <20220122175533.912631-2-yux.jiang@intel.com> (raw)
In-Reply-To: <20220122175533.912631-1-yux.jiang@intel.com>
test_plans/*: modify test plan to adapt meson build
Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
test_plans/blocklist_test_plan.rst | 6 +--
test_plans/checksum_offload_test_plan.rst | 2 +-
.../cloud_filter_with_l4_port_test_plan.rst | 2 +-
test_plans/cmdline_test_plan.rst | 9 +++-
test_plans/dcf_lifecycle_test_plan.rst | 52 +++++++++----------
test_plans/ddp_gtp_qregion_test_plan.rst | 2 +-
test_plans/ddp_gtp_test_plan.rst | 2 +-
test_plans/ddp_l2tpv3_test_plan.rst | 2 +-
test_plans/ddp_mpls_test_plan.rst | 2 +-
test_plans/ddp_ppp_l2tp_test_plan.rst | 2 +-
test_plans/dual_vlan_test_plan.rst | 2 +-
test_plans/dynamic_flowtype_test_plan.rst | 2 +-
test_plans/dynamic_queue_test_plan.rst | 2 +-
test_plans/eeprom_dump_test_plan.rst | 2 +-
test_plans/ethtool_stats_test_plan.rst | 34 ++++++------
test_plans/eventdev_pipeline_test_plan.rst | 24 +++++----
test_plans/external_memory_test_plan.rst | 8 +--
.../external_mempool_handler_test_plan.rst | 23 ++++----
test_plans/interrupt_pmd_test_plan.rst | 15 ++++--
test_plans/ip_pipeline_test_plan.rst | 33 +++++++-----
test_plans/ipgre_test_plan.rst | 6 +--
test_plans/ipv4_reassembly_test_plan.rst | 24 +++++----
| 4 +-
test_plans/jumboframes_test_plan.rst | 4 +-
test_plans/kernelpf_iavf_test_plan.rst | 12 ++---
test_plans/kni_test_plan.rst | 14 ++---
test_plans/l2fwd_jobstats_test_plan.rst | 11 +++-
27 files changed, 171 insertions(+), 130 deletions(-)
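For reviewers: the edits in this series are almost entirely mechanical renames of legacy make-build binary paths to their meson-build equivalents. A small sketch of that mapping, inferred from the hunks in this patch (the table and helper name are illustrative, not part of the patch itself):

```python
# Legacy-make -> meson binary renames applied across these test plans
# (inferred from the hunks in this patch; illustrative only).
RENAMES = {
    "build/testpmd": "build/app/dpdk-testpmd",
    "build/app/testpmd": "build/app/dpdk-testpmd",
    "app/testpmd": "app/dpdk-testpmd",
    "app/dpdk-procinfo": "app/dpdk-proc-info",
    "build/app/cmdline": "build/examples/dpdk-cmdline",
}

def meson_path(cmd: str) -> str:
    """Rewrite a legacy example command to its meson-build equivalent.

    Longer keys are tried first so "build/app/testpmd" wins over the
    shorter "app/testpmd" substring.
    """
    for old, new in sorted(RENAMES.items(), key=lambda kv: -len(kv[0])):
        if old in cmd:
            return cmd.replace(old, new)
    return cmd
```

For example, `meson_path("build/testpmd -c 3 -- -i")` yields the new `build/app/dpdk-testpmd -c 3 -- -i` form used in the blocklist plan.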
diff --git a/test_plans/blocklist_test_plan.rst b/test_plans/blocklist_test_plan.rst
index a284448d..f1231331 100644
--- a/test_plans/blocklist_test_plan.rst
+++ b/test_plans/blocklist_test_plan.rst
@@ -53,7 +53,7 @@ Test Case: Testpmd with no blocklisted device
Run testpmd in interactive mode and ensure that at least 2 ports
are bound and available::
- build/testpmd -c 3 -- -i
+ build/app/dpdk-testpmd -c 3 -- -i
....
EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.0/driver/unbind
EAL: Core 1 is ready (tid=357fc700)
@@ -91,7 +91,7 @@ Test Case: Testpmd with one port blocklisted
Select first available port to be blocklisted and specify it with -b option. For the example above::
- build/testpmd -c 3 -b 0000:01:00.0 -- -i
+ build/app/dpdk-testpmd -c 3 -b 0000:01:00.0 -- -i
Check that corresponding device is skipped for binding, and
only 3 ports are available now:::
@@ -126,7 +126,7 @@ Test Case: Testpmd with all but one port blocklisted
Blocklist all devices except the last one.
For the example above:::
- build/testpmd -c 3 -b 0000:01:00.0 -b 0000:01:00.0 -b 0000:02:00.0 -- -i
+ build/app/dpdk-testpmd -c 3 -b 0000:01:00.0 -b 0000:01:00.0 -b 0000:02:00.0 -- -i
Check that 3 corresponding device is skipped for binding, and
only 1 ports is available now:::
diff --git a/test_plans/checksum_offload_test_plan.rst b/test_plans/checksum_offload_test_plan.rst
index f4b388c4..7b29b1ec 100644
--- a/test_plans/checksum_offload_test_plan.rst
+++ b/test_plans/checksum_offload_test_plan.rst
@@ -92,7 +92,7 @@ to the device under test::
Assuming that ports ``0`` and ``2`` are connected to a traffic generator,
launch the ``testpmd`` with the following arguments::
- ./build/app/testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
+ ./build/app/dpdk-testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5
enable-rx-cksum
diff --git a/test_plans/cloud_filter_with_l4_port_test_plan.rst b/test_plans/cloud_filter_with_l4_port_test_plan.rst
index ed2109eb..e9f226ac 100644
--- a/test_plans/cloud_filter_with_l4_port_test_plan.rst
+++ b/test_plans/cloud_filter_with_l4_port_test_plan.rst
@@ -49,7 +49,7 @@ Prerequisites
./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:81:00.0
4.Launch the testpmd::
- ./testpmd -l 0-3 -n 4 -a 81:00.0 --file-prefix=test -- -i --rxq=16 --txq=16 --disable-rss
+ ./build/app/dpdk-testpmd -l 0-3 -n 4 -a 81:00.0 --file-prefix=test -- -i --rxq=16 --txq=16 --disable-rss
testpmd> set fwd rxonly
testpmd> set promisc all off
testpmd> set verbose 1
diff --git a/test_plans/cmdline_test_plan.rst b/test_plans/cmdline_test_plan.rst
index d1499991..70a17b00 100644
--- a/test_plans/cmdline_test_plan.rst
+++ b/test_plans/cmdline_test_plan.rst
@@ -66,9 +66,16 @@ to the device under test::
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+Build dpdk and examples=cmdline::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=cmdline <build_target>
+ ninja -C <build_target>
+
Launch the ``cmdline`` with 24 logical cores in linuxapp environment::
- $ ./build/app/cmdline -cffffff
+ $ ./build/examples/dpdk-cmdline -cffffff
Test the 3 simple commands in below prompt ::
diff --git a/test_plans/dcf_lifecycle_test_plan.rst b/test_plans/dcf_lifecycle_test_plan.rst
index 4c010e76..2c8628f2 100644
--- a/test_plans/dcf_lifecycle_test_plan.rst
+++ b/test_plans/dcf_lifecycle_test_plan.rst
@@ -102,7 +102,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
Expected: VF get DCF mode. There are outputs in testpmd launching ::
@@ -128,8 +128,8 @@ Set a VF as trust on each PF ::
Launch dpdk on the VF on each PF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:11.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf1 -- -i
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-15 -n 4 -a 18:11.0,cap=dcf --file-prefix=dcf2 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf1 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-15 -n 4 -a 18:11.0,cap=dcf --file-prefix=dcf2 -- -i
Expected: VF get DCF mode. There are outputs in each testpmd launching ::
@@ -152,7 +152,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.1,cap=dcf --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.1,cap=dcf --file-prefix=vf -- -i
Expected: VF can NOT get DCF mode. testpmd should provide a friendly output ::
@@ -180,7 +180,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
Expected: VF can NOT get DCF mode. testpmd should provide a friendly output ::
@@ -208,11 +208,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -260,11 +260,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -309,11 +309,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -360,11 +360,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the DCF ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf2 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf2 -- -i
Expect: the second testpmd can't be launched
@@ -385,16 +385,16 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1 and VF2, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf1 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 --file-prefix=vf1 -- -i
set verbose 1
set fwd mac
start
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 15-16 -n 4 -a 18:01.2 --file-prefix=vf2 -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 15-16 -n 4 -a 18:01.2 --file-prefix=vf2 -- -i
set verbose 1
set fwd mac
start
@@ -453,11 +453,11 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1 18:01.2
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Launch another testpmd on the VF1, and start mac forward ::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 11-14 -n 4 -a 18:01.1 -a 18:01.2 --file-prefix=vf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 11-14 -n 4 -a 18:01.1 -a 18:01.2 --file-prefix=vf -- -i
set verbose 1
set fwd mac
start
@@ -549,7 +549,7 @@ Set ADQ on PF ::
Try to launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Expect: testpmd can't be launched. PF should reject DCF mode.
@@ -565,7 +565,7 @@ Remove ADQ on PF ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Expect: testpmd can launch successfully. DCF mode can be grant ::
@@ -589,7 +589,7 @@ Set a VF as trust ::
Launch dpdk on the VF, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Set ADQ on PF ::
@@ -629,7 +629,7 @@ Set a VF as trust ::
Launch dpdk on the VF0 on PF1, request DCF mode ::
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=dcf -- -i
Set ADQ on PF2 ::
@@ -973,7 +973,7 @@ TC31: add ACL rule by kernel, reject request for DCF functionality
3. launch testpmd on VF0 requesting for DCF funtionality::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
report error::
@@ -1015,7 +1015,7 @@ TC32: add ACL rule by kernel, accept request for DCF functionality of another PF
3. launch testpmd on VF0 of PF0 requesting for DCF funtionality successfully::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
show the port info::
@@ -1032,7 +1032,7 @@ TC33: ACL DCF mode is active, add ACL filters by way of host based configuration
2. launch testpmd on VF0 of PF0 requesting for DCF funtionality successfully::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
show the port info::
@@ -1061,7 +1061,7 @@ TC34: ACL DCF mode is active, add ACL filters by way of host based configuration
2. launch testpmd on VF0 of PF0 requesting for DCF funtionality successfully::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 18:01.0,cap=dcf --log-level=ice,7 -- -i --port-topology=loop
show the port info::
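The DCF cases above all launch testpmd with the same shape of command line: the trusted VF gets the `cap=dcf` devarg, any data-path VFs are passed plainly. A sketch of that composition (helper name and defaults are mine, not part of the test plan):

```python
def dcf_testpmd_cmd(build_dir: str, lcores: str, dcf_vf: str,
                    extra_vfs=(), file_prefix: str = "dcf") -> str:
    """Compose the dpdk-testpmd invocation used throughout the DCF cases:
    the first VF is passed with the cap=dcf devarg, extra VFs plainly."""
    allow = [f"-a {dcf_vf},cap=dcf"] + [f"-a {vf}" for vf in extra_vfs]
    return (f"./{build_dir}/app/dpdk-testpmd -l {lcores} -n 4 "
            f"{' '.join(allow)} --file-prefix={file_prefix} -- -i")
```

For example, `dcf_testpmd_cmd("x86_64-native-linuxapp-gcc", "6-10", "18:01.0")` reproduces the first launch command of TC01.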
diff --git a/test_plans/ddp_gtp_qregion_test_plan.rst b/test_plans/ddp_gtp_qregion_test_plan.rst
index 596f4855..7e2b1816 100644
--- a/test_plans/ddp_gtp_qregion_test_plan.rst
+++ b/test_plans/ddp_gtp_qregion_test_plan.rst
@@ -86,7 +86,7 @@ Prerequisites
--pkt-filter-mode=perfect on testpmd to enable flow director. In general,
PF's max queue is 64::
- ./testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect
--port-topology=chained --txq=64 --rxq=64
diff --git a/test_plans/ddp_gtp_test_plan.rst b/test_plans/ddp_gtp_test_plan.rst
index ed5139bc..0fd5a50d 100644
--- a/test_plans/ddp_gtp_test_plan.rst
+++ b/test_plans/ddp_gtp_test_plan.rst
@@ -82,7 +82,7 @@ Prerequisites
port topology mode, add txq/rxq to enable multi-queues. In general, PF's
max queue is 64, VF's max queue is 4::
- ./testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect --port-topology=chained --tx-offloads=0x8fff --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect --port-topology=chained --tx-offloads=0x8fff --txq=64 --rxq=64
Test Case: Load dynamic device personalization
diff --git a/test_plans/ddp_l2tpv3_test_plan.rst b/test_plans/ddp_l2tpv3_test_plan.rst
index 8262da35..d4ae0f55 100644
--- a/test_plans/ddp_l2tpv3_test_plan.rst
+++ b/test_plans/ddp_l2tpv3_test_plan.rst
@@ -100,7 +100,7 @@ any DDP functionality*
5. Start the TESTPMD::
- ./x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -c f -n 4 -a
+ ./<build>/app/dpdk-testpmd -c f -n 4 -a
<PCI address of device> -- -i --port-topology=chained --txq=64 --rxq=64
--pkt-filter-mode=perfect
diff --git a/test_plans/ddp_mpls_test_plan.rst b/test_plans/ddp_mpls_test_plan.rst
index d76934c1..6c4d0e01 100644
--- a/test_plans/ddp_mpls_test_plan.rst
+++ b/test_plans/ddp_mpls_test_plan.rst
@@ -70,7 +70,7 @@ Prerequisites
enable multi-queues. In general, PF's max queue is 64, VF's max queue
is 4::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
--txq=4 --rxq=4
diff --git a/test_plans/ddp_ppp_l2tp_test_plan.rst b/test_plans/ddp_ppp_l2tp_test_plan.rst
index 8f51ff20..3f9c53b7 100644
--- a/test_plans/ddp_ppp_l2tp_test_plan.rst
+++ b/test_plans/ddp_ppp_l2tp_test_plan.rst
@@ -109,7 +109,7 @@ Prerequisites
--pkt-filter-mode=perfect on testpmd to enable flow director. In general,
PF's max queue is 64::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
--pkt-filter-mode=perfect
Load/delete dynamic device personalization
diff --git a/test_plans/dual_vlan_test_plan.rst b/test_plans/dual_vlan_test_plan.rst
index 9955ef5d..a7e03bcc 100644
--- a/test_plans/dual_vlan_test_plan.rst
+++ b/test_plans/dual_vlan_test_plan.rst
@@ -56,7 +56,7 @@ to the device under test::
Assuming that ports ``0`` and ``1`` are connected to the traffic generator's port ``A`` and ``B``,
launch the ``testpmd`` with the following arguments::
- ./build/app/testpmd -c ffffff -n 3 -- -i --burst=1 --txpt=32 \
+ ./<build>/app/dpdk-testpmd -c ffffff -n 3 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x3
The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
diff --git a/test_plans/dynamic_flowtype_test_plan.rst b/test_plans/dynamic_flowtype_test_plan.rst
index 1acf60c8..5fda715e 100644
--- a/test_plans/dynamic_flowtype_test_plan.rst
+++ b/test_plans/dynamic_flowtype_test_plan.rst
@@ -87,7 +87,7 @@ Prerequisites
2. Start testpmd on host, set chained port topology mode, add txq/rxq to
enable multi-queues. In general, PF's max queue is 64::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
3. Set rxonly forwarding and enable output
diff --git a/test_plans/dynamic_queue_test_plan.rst b/test_plans/dynamic_queue_test_plan.rst
index 6be6ec74..dc1d350a 100644
--- a/test_plans/dynamic_queue_test_plan.rst
+++ b/test_plans/dynamic_queue_test_plan.rst
@@ -79,7 +79,7 @@ Prerequisites
2. Start testpmd on host, set chained port topology mode, add txq/rxq to
enable multi-queues::
- ./testpmd -c 0xf -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
+ ./<build>/app/dpdk-testpmd -c 0xf -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
Test Case: Rx queue setup at runtime
diff --git a/test_plans/eeprom_dump_test_plan.rst b/test_plans/eeprom_dump_test_plan.rst
index 3b169c39..3923f3fc 100644
--- a/test_plans/eeprom_dump_test_plan.rst
+++ b/test_plans/eeprom_dump_test_plan.rst
@@ -54,7 +54,7 @@ to the device under test::
Assuming that ports are up and working, then launch the ``testpmd`` application
with the following arguments::
- ./build/app/testpmd -- -i --portmask=0x3
+ ./<build>/app/dpdk-testpmd -- -i --portmask=0x3
Test Case : EEPROM Dump
=======================
diff --git a/test_plans/ethtool_stats_test_plan.rst b/test_plans/ethtool_stats_test_plan.rst
index 95f9e7a6..7947b68d 100644
--- a/test_plans/ethtool_stats_test_plan.rst
+++ b/test_plans/ethtool_stats_test_plan.rst
@@ -74,7 +74,7 @@ bind two ports::
Test Case: xstat options
------------------------
-check ``dpdk-procinfo`` tool support ``xstats`` command options.
+check ``dpdk-proc-info`` tool supports ``xstats`` command options.
These options should be included::
@@ -87,17 +87,17 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
testpmd> start
-#. run ``dpdk-procinfo`` tool::
+#. run ``dpdk-proc-info`` tool::
- ./<target name>/app/dpdk-procinfo
+ ./<target name>/app/dpdk-proc-info
-#. check ``dpdk-procinfo`` tool output should contain upper options.
+#. check ``dpdk-proc-info`` tool output contains the options listed above.
Test Case: xstat statistic integrity
------------------------------------
@@ -108,7 +108,7 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
@@ -118,11 +118,11 @@ steps:
sendp([Ether()/IP()/UDP()/Raw('\0'*60)], iface=<port 0 name>)
-#. run ``dpdk-procinfo`` tool with ``xstats`` option and check if all ports
+#. run ``dpdk-proc-info`` tool with ``xstats`` option and check if all ports
extended statistics can access by xstat name or xstat id::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-id <N>
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-name <statistic name>
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-id <N>
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-name <statistic name>
Test Case: xstat-reset command
------------------------------
@@ -133,7 +133,7 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
@@ -143,10 +143,10 @@ steps:
sendp([Ether()/IP()/UDP()/Raw('\0'*60)], iface=<port 0 name>)
-#. run ``dpdk-procinfo`` tool with ``xstats-reset`` option and check if all port
+#. run ``dpdk-proc-info`` tool with ``xstats-reset`` option and check if all port
statistics have been cleared::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-reset
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-reset
Test Case: xstat single statistic
---------------------------------
@@ -158,7 +158,7 @@ steps:
#. boot up ``testpmd``::
- ./<target name>/app/testpmd -c 0x3 -n 4 -- -i --port-topology=loop
+ ./<target name>/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=loop
testpmd> set fwd io
testpmd> clear port xstats all
@@ -172,14 +172,14 @@ steps:
testpmd> show port xstats all
-#. run ``dpdk-procinfo`` tool with ``xstats-id`` option to get the statistic
+#. run ``dpdk-proc-info`` tool with ``xstats-id`` option to get the statistic
name corresponding with the index id::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-id 0,1,...N
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-id 0,1,...N
-#. run ``dpdk-procinfo`` tool with ``xstats-name`` option to get the statistic
+#. run ``dpdk-proc-info`` tool with ``xstats-name`` option to get the statistic
data corresponding with the statistic name::
- ./<target name>/app/dpdk-procinfo -- -p 3 --xstats-name <statistic name>
+ ./<target name>/app/dpdk-proc-info -- -p 3 --xstats-name <statistic name>
#. compare these proc info tool xstat values with testpmd xstat values.
\ No newline at end of file
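The xstat cases above query ``dpdk-proc-info`` in three flavors: by xstat id, by xstat name, or a full dump. A sketch of how those invocations are composed (flag spellings follow this test plan; the helper itself is illustrative):

```python
def xstats_query(build_dir: str, portmask: int, ids=None, name=None) -> str:
    """Compose the dpdk-proc-info xstats invocations used in the cases
    above: ids -> --xstats-id, name -> --xstats-name, neither -> dump."""
    cmd = f"./{build_dir}/app/dpdk-proc-info -- -p {portmask}"
    if ids is not None:
        return cmd + " --xstats-id " + ",".join(str(i) for i in ids)
    if name is not None:
        return cmd + f" --xstats-name {name}"
    return cmd + " --xstats"
```

For example, `xstats_query("build", 3, ids=[0, 1])` gives the per-id form used in the "xstat single statistic" case.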
diff --git a/test_plans/eventdev_pipeline_test_plan.rst b/test_plans/eventdev_pipeline_test_plan.rst
index 866eae72..4e4498d4 100644
--- a/test_plans/eventdev_pipeline_test_plan.rst
+++ b/test_plans/eventdev_pipeline_test_plan.rst
@@ -36,6 +36,12 @@ Eventdev Pipeline SW PMD Tests
Prerequisites
==============
+Build dpdk and examples=eventdev_pipeline::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=eventdev_pipeline <build_target>
+ ninja -C <build_target>
Test Case: Keep the packets order with default stage in single-flow and multi-flow
====================================================================================
@@ -43,7 +49,7 @@ Description: the packets' order which will pass through a same flow should be gu
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
Parameters:
-r2, -t4, -e8: allocate cores to rx, tx and shedular
@@ -62,7 +68,7 @@ Description: the sample only guarantee that keep the packets order with only one
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
2. Send traffic from ixia device with same 5 tuple(single-link) and with different 5-tuple(multi-flow)
@@ -75,7 +81,7 @@ in single-flow, the load-balanced behavior is not guaranteed;
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -D
2. Use traffic generator to send huge number of packets:
In single-flow situation, traffic generator will send packets with the same 5-tuple
@@ -90,7 +96,7 @@ Description: A good load-balanced behavior should be guaranteed in both single-f
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
2. Use traffic generator to send huge number of packets:
In single-flow situation, traffic generator will send packets with the same 5-tuple
@@ -105,7 +111,7 @@ Description: A good load-balanced behavior should be guaranteed in both single-f
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -p -D
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -p -D
2. Use traffic generator to send huge number of packets:
In single-flow situation, traffic generator will send packets with the same 5-tuple
@@ -121,7 +127,7 @@ We use 4 worker and 2 stage as the test background.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32
2. use traffic generator to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -134,7 +140,7 @@ We use 4 worker and 2 stage as the test background.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -p
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -p
2. use traffic generator to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -147,7 +153,7 @@ We use 4 worker and 2 stage as the test background.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -o
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -o
2. use traffic generator to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -159,6 +165,6 @@ Description: Execute basic forward test with all type of stage.
1. Run the sample with below command::
- # ./build/eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
+    # ./<build_target>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
2. use traffic generator to send some packets and verify the sample could forward them normally
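The `-r2 -t4 -e8 -w F0` options used throughout these cases are hex core masks that allocate lcores to rx, tx, scheduler, and workers. A small sketch of how such a mask expands to lcore ids (illustrative only, not part of the sample app):

```python
def cores_from_mask(mask: str):
    """Expand a hex core mask (as passed to -r/-t/-e/-w) into lcore ids."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]
```

For example, `-w F0` assigns worker threads to lcores 4 through 7, while `-r2`, `-t4`, and `-e8` pin rx, tx, and the scheduler to lcores 1, 2, and 3 respectively.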
diff --git a/test_plans/external_memory_test_plan.rst b/test_plans/external_memory_test_plan.rst
index 42f57726..7109e337 100644
--- a/test_plans/external_memory_test_plan.rst
+++ b/test_plans/external_memory_test_plan.rst
@@ -46,7 +46,7 @@ Bind the ports to IGB_UIO driver
Start testpmd with --mp-alloc=xmem flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
Start forward in testpmd
@@ -60,7 +60,7 @@ Bind the ports to IGB_UIO driver
Start testpmd with --mp-alloc=xmemhuge flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
Start forward in testpmd
@@ -73,7 +73,7 @@ Bind the ports to vfio-pci driver
Start testpmd with --mp-alloc=xmem flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmem -i
Start forward in testpmd
@@ -86,7 +86,7 @@ Bind the ports to vfio-pci driver
Start testpmd with --mp-alloc=xmemhuge flag::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -- --mp-alloc=xmemhuge -i
Start forward in testpmd
diff --git a/test_plans/external_mempool_handler_test_plan.rst b/test_plans/external_mempool_handler_test_plan.rst
index 09ed4ca9..2f821364 100644
--- a/test_plans/external_mempool_handler_test_plan.rst
+++ b/test_plans/external_mempool_handler_test_plan.rst
@@ -42,13 +42,14 @@ systems and software based memory allocators to be used with DPDK.
Test Case 1: Multiple producers and multiple consumers
======================================================
-1. Change default mempool handler operations to "ring_mp_mc"::
+1. The default mempool handler operation is "ring_mp_mc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_mp_mc\"/' ./config/common_base
+ cat /root/dpdk/config/rte_config.h |grep MEMPOOL_OPS
+ #define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -65,11 +66,11 @@ Test Case 2: Single producer and Single consumer
1. Change default mempool operation to "ring_sp_sc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_sp_sc\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"ring_sp_sc\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -86,11 +87,11 @@ Test Case 3: Single producer and Multiple consumers
1. Change default mempool operation to "ring_sp_mc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_sp_mc\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"ring_sp_mc\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -107,11 +108,11 @@ Test Case 4: Multiple producers and single consumer
1. Change default mempool operation to "ring_mp_sc"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"ring_mp_sc\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"ring_mp_sc\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
@@ -128,11 +129,11 @@ Test Case 4: Stack mempool handler
1. Change default mempool operation to "stack"::
- sed -i 's/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=.*$/CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=\"stack\"/' ./config/common_base
+ sed -i '$a\#define RTE_MBUF_DEFAULT_MEMPOOL_OPS \"stack\"' config/rte_config.h
2. Start test app and verify mempool autotest passed::
- test -n 4 -c f
+ ./<build_target>/app/test/dpdk-test -n 4 -c f
RTE>> mempool_autotest
3. Start testpmd with two ports and start forwarding::
diff --git a/test_plans/interrupt_pmd_test_plan.rst b/test_plans/interrupt_pmd_test_plan.rst
index cb8b2f19..c89d68e2 100644
--- a/test_plans/interrupt_pmd_test_plan.rst
+++ b/test_plans/interrupt_pmd_test_plan.rst
@@ -60,12 +60,19 @@ Iommu pass through feature has been enabled in kernel::
Both igb_uio and vfio drivers are supported. If vfio is used, the kernel must be 3.6+ and VT-d enabled
in the BIOS. When using vfio, insmod the two drivers vfio and vfio-pci.
+Build dpdk and examples=l3fwd-power::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=l3fwd-power <build_target>
+ ninja -C <build_target>
+
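The same two-step meson/ninja sequence recurs for every example in these plans. A dry-run helper can sketch it (echoing rather than executing, since meson, ninja, and the ``<build_target>`` directory are environment-specific assumptions):

```shell
# Sketch: echo the meson/ninja steps used throughout these plans for a given
# example; the build-target directory name is a placeholder assumption
# (e.g. x86_64-native-linuxapp-gcc).
build_example() {
  target=$1; example=$2
  echo "CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static $target"
  echo "meson configure -Dexamples=$example $target"
  echo "ninja -C $target"
}
build_example x86_64-native-linuxapp-gcc l3fwd-power
```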
Test Case1: PF interrupt pmd with different queue
=================================================
Run l3fwd-power with one queue per port::
- l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send one packet to Port0 and Port1, check that the threads on core1 and core2
are woken up::
@@ -85,7 +92,7 @@ keep up awake.
Run l3fwd-power with a random number of queues per port, for example 4::
- l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="0,0,0),(0,1,1),\
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
(0,2,2),(0,3,3),(0,4,4)"
Send packets with increasing dest IP to Port0, check that all threads are woken up
@@ -95,7 +102,7 @@ keep up awake.
Run l3fwd-power with 15 queues in total across two ports::
- l3fwd-power -c 0xffffff -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0xffffff -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
(0,2,2),(0,3,3),(0,4,4),(0,5,5),(0,6,6),(0,7,7),(1,0,8),\
(1,1,9),(1,2,10),(1,3,11),(1,4,12),(1,5,13),(1,6,14)"
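The 15-tuple ``--config`` string above follows a simple (port,queue,lcore) layout: port 0 queues 0-7 on lcores 0-7, then port 1 queues 0-6 on lcores 8-14. A small sketch can generate it instead of typing it by hand:

```shell
# Sketch: build the l3fwd-power --config string of (port,queue,lcore) tuples
# programmatically, reproducing the 15-queue example above.
cfg=""
lcore=0
for port in 0 1; do
  for queue in 0 1 2 3 4 5 6 7; do
    if [ "$lcore" -ge 15 ]; then break; fi
    cfg="${cfg:+$cfg,}($port,$queue,$lcore)"
    lcore=$((lcore + 1))
  done
done
echo "$cfg"
```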
@@ -109,7 +116,7 @@ Test Case2: PF lsc interrupt with vfio
Run l3fwd-power with one queue per port::
- l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
+ ./<build_target>/examples/dpdk-l3fwd-power -c 0x7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Plug out Port0 cable, check that link down interrupt captured and handled by
pmd driver.
diff --git a/test_plans/ip_pipeline_test_plan.rst b/test_plans/ip_pipeline_test_plan.rst
index 1c774e3c..5452bc90 100644
--- a/test_plans/ip_pipeline_test_plan.rst
+++ b/test_plans/ip_pipeline_test_plan.rst
@@ -76,6 +76,13 @@ Change pci device id of LINK0 to pci device id of dut_port_0.
There are two drivers supported now: aesni_gcm and aesni_mb.
Different drivers support different Algorithms.
+Build dpdk and examples=ip_pipeline::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=ip_pipeline <build_target>
+ ninja -C <build_target>
+
Test Case: l2fwd pipeline
===========================
1. Edit examples/ip_pipeline/examples/l2fwd.cli,
@@ -84,7 +91,7 @@ Test Case: l2fwd pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -- -s examples/l2fwd.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/l2fwd.cli
3. Send packets at tester side with scapy, verify:
@@ -99,7 +106,7 @@ Test Case: flow classification pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/flow.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/flow.cli
3. Send following packets with one test port::
@@ -121,7 +128,7 @@ Test Case: routing pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/route.cli,
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/route.cli
3. Send following packets with one test port::
@@ -143,7 +150,7 @@ Test Case: firewall pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/firewall.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/firewall.cli
3. Send following packets with one test port::
@@ -164,7 +171,7 @@ Test Case: pipeline with tap
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 –- -s examples/tap.cli,
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/tap.cli
3. Send packets at tester side with scapy, verify
packets sent from tester_port_0 can be received at tester_port_1, and vice versa.
@@ -178,7 +185,7 @@ Test Case: traffic management pipeline
3. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -a 0000:81:00.0 -- -s examples/traffic_manager.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -a 0000:81:00.0 -- -s examples/traffic_manager.cli
4. Configure traffic with dst ip address increasing from 0.0.0.0 to 15.255.0.0, 4096 streams in total;
also configure flows tracked by dst ip address, and verify each flow's throughput is about linerate/4096.
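As an arithmetic cross-check on the stream count above (a note, not part of the plan): sweeping the first destination octet over 0-15 and the second over 0-255 gives

```shell
# Sketch: dst addresses 0.0.0.0 .. 15.255.0.0, stepping the first two octets,
# yield 16 * 256 distinct flows, matching the 4096-stream figure above.
streams=$((16 * 256))
echo "$streams"
```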
@@ -191,7 +198,7 @@ Test Case: RSS pipeline
2. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x1f -n 4 –- -s examples/rss.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x1f -n 4 -- -s examples/rss.cli
3. Send following packets with one test port::
@@ -220,7 +227,7 @@ Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
2. Start testpmd with the four pf ports::
- ./testpmd -c 0xf0 -n 4 -a 05:00.0 -a 05:00.1 -a 05:00.2 -a 05:00.3 --file-prefix=pf --socket-mem 1024,1024 -- -i
+ ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 -a 05:00.0 -a 05:00.1 -a 05:00.2 -a 05:00.3 --file-prefix=pf --socket-mem 1024,1024 -- -i
Set vf mac address from pf port::
@@ -235,7 +242,7 @@ Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
4. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -a 0000:05:02.0 -a 0000:05:06.0 \
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -a 0000:05:02.0 -a 0000:05:06.0 \
-a 0000:05:0a.0 -a 0000:05:0e.0 --file-prefix=vf --socket-mem 1024,1024 -- -s examples/vf.cli
The exact format of port allowlist: domain:bus:devid.func
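A quick sanity check on allowlist entries such as ``0000:05:02.0`` before passing them via ``-a`` can be sketched as follows (the regex is an assumption: lowercase or uppercase hex, single-digit function number):

```shell
# Sketch: validate a PCI allowlist entry in domain:bus:devid.func form
# (e.g. 0000:05:02.0); pattern is an illustrative assumption, not DPDK code.
bdf="0000:05:02.0"
if echo "$bdf" | grep -Eq '^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-7]$'; then
  echo "valid: $bdf"
fi
```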
@@ -290,7 +297,7 @@ Test Case: vf l2fwd pipeline(pf bound to kernel driver)
4. Run ip_pipeline app as the following::
- ./build/ip_pipeline -c 0x3 -n 4 -- -s examples/vf.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/vf.cli
5. Send packets at tester side with scapy::
@@ -331,7 +338,7 @@ Test Case: crypto pipeline - AEAD algorithm in aesni_gcm
4. Run ip_pipeline app as the following::
- ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_gcm0
+ ./<build_target>/examples/dpdk-ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_gcm0
--socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
5. Send packets with IXIA port,
@@ -365,7 +372,7 @@ Test Case: crypto pipeline - cipher algorithm in aesni_mb
4. Run ip_pipeline app as the following::
- ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
5. Send packets with IXIA port,
Use a tool to calculate the ciphertext from plaintext and key as an expected value.
@@ -395,7 +402,7 @@ Test Case: crypto pipeline - cipher_auth algorithm in aesni_mb
4. Run ip_pipeline app as the following::
- ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
+ ./<build_target>/examples/dpdk-ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli
5. Send packets with IXIA port,
Use a tool to calculate the ciphertext from plaintext and cipher key with AES-CBC algorithm.
diff --git a/test_plans/ipgre_test_plan.rst b/test_plans/ipgre_test_plan.rst
index 3a466b75..2c652273 100644
--- a/test_plans/ipgre_test_plan.rst
+++ b/test_plans/ipgre_test_plan.rst
@@ -48,7 +48,7 @@ Test Case 1: GRE ipv4 packet detect
Start testpmd and enable rxonly forwarding mode::
- testpmd -c ffff -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c ffff -n 4 -- -i
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -77,7 +77,7 @@ Test Case 2: GRE ipv6 packet detect
Start testpmd and enable rxonly forwarding mode::
- testpmd -c ffff -n 4 -- -i --enable-hw-vlan
+ ./<build_target>/app/dpdk-testpmd -c ffff -n 4 -- -i --enable-hw-vlan
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
@@ -124,7 +124,7 @@ Test Case 4: GRE packet chksum offload
Start testpmd with hardware checksum offload enabled::
- testpmd -c ff -n 3 -- -i --enable-rx-cksum --port-topology=loop
+ ./<build_target>/app/dpdk-testpmd -c ff -n 3 -- -i --enable-rx-cksum --port-topology=loop
testpmd> set verbose 1
testpmd> set fwd csum
testpmd> csum set ip hw 0
diff --git a/test_plans/ipv4_reassembly_test_plan.rst b/test_plans/ipv4_reassembly_test_plan.rst
index 75aba16e..354dae51 100644
--- a/test_plans/ipv4_reassembly_test_plan.rst
+++ b/test_plans/ipv4_reassembly_test_plan.rst
@@ -56,13 +56,19 @@ to the device under test::
1x Intel® 82599 (Niantic) NICs (1x 10GbE full duplex optical ports per NIC)
plugged into the available PCIe Gen2 8-lane slots.
+Build dpdk and examples=ip_reassembly::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=ip_reassembly <build_target>
+ ninja -C <build_target>
Test Case: Send 1K packets, 4 fragments each and 1K maxflows
============================================================
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 1K packets split in 4 fragments each with a ``maxflows`` of 1K.
@@ -79,7 +85,7 @@ Test Case: Send 2K packets, 4 fragments each and 1K maxflows
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 2K packets split in 4 fragments each with a ``maxflows`` of 1K.
@@ -96,7 +102,7 @@ Test Case: Send 4K packets, 7 fragments each and 4K maxflows
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=4096 --flowttl=10s
Modifies the sample app source code to enable up to 7 fragments per packet,
@@ -116,7 +122,7 @@ Test Case: Send +1K packets and ttl 3s; wait +ttl; send 1K packets
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=3s
Sends 1100 packets split in 4 fragments each.
@@ -142,7 +148,7 @@ Test Case: Send more packets than maxflows; only maxflows packets are forwarded
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1023 --flowttl=5s
Sends 1K packets with ``maxflows`` equal to 1023.
@@ -175,7 +181,7 @@ Test Case: Send more fragments than supported
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 1 packet split in 5 fragments while the maximum number of supported
@@ -194,7 +200,7 @@ Test Case: Send 3 frames and delay the 4th; no frames are forwarded back
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=3s
Creates 1 packet split in 4 fragments. Sends the first 3 fragments and waits
@@ -213,7 +219,7 @@ Test Case: Send jumbo frames
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s --enable-jumbo --max-pkt-len=9500
Sets the NIC MTU to 9000 and sends 1K packets of 8900B split in 4 fragments of
@@ -232,7 +238,7 @@ Test Case: Send jumbo frames without enable them in the app
Sample command::
- ./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
+ ./<build_target>/examples/dpdk-ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends jumbo packets in the same way the previous test case does but without
diff --git a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
index 146f04bb..07c67e76 100644
--- a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
+++ b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
@@ -89,7 +89,7 @@ Test case 1: DPDK PF, kernel VF, enable DCB mode with TC=4
1. start the testpmd on PF::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=16
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 1ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=16
testpmd> port stop 0
testpmd> port config 0 dcb vt on 4 pfc off
testpmd> port start 0
@@ -135,7 +135,7 @@ Test case 2: DPDK PF, kernel VF, disable DCB mode
1. start the testpmd on PF::
- ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=2 --txq=2 --nb-cores=16
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 1ffff -n 4 -- -i --rxq=2 --txq=2 --nb-cores=16
2. Check if the VF port is linked; if the VF port is down, bring it up::
diff --git a/test_plans/jumboframes_test_plan.rst b/test_plans/jumboframes_test_plan.rst
index a713ee5d..65287cd1 100644
--- a/test_plans/jumboframes_test_plan.rst
+++ b/test_plans/jumboframes_test_plan.rst
@@ -59,7 +59,7 @@ Assuming that ports ``0`` and ``1`` of the test target are directly connected
to the traffic generator, launch the ``testpmd`` application with the following
arguments::
- ./build/app/testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \
+ ./<build_target>/app/dpdk-testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \
--tx-offloads=0x00008000
The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
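A hedged note on the 9600 value above: ``--max-pkt-len`` counts the whole Ethernet frame, so the usable IP MTU is smaller by roughly the 14-byte Ethernet header and 4-byte CRC (assuming no VLAN tag; the exact accounting varies by NIC):

```shell
# Sketch: approximate usable MTU for --max-pkt-len=9600
# (no VLAN tag assumed; 14B Ethernet header + 4B CRC subtracted).
max_pkt_len=9600
mtu=$((max_pkt_len - 14 - 4))
echo "$mtu"
```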
@@ -179,7 +179,7 @@ Test Case: Normal frames with jumbo frame support
Start testpmd with jumbo frame support enabled ::
- ./testpmd -c ffffff -n 3 -- -i --rxd=1024 --txd=1024 \
+ ./<build_target>/app/dpdk-testpmd -c ffffff -n 3 -- -i --rxd=1024 --txd=1024 \
--burst=144 --txpt=32 --txht=8 --txwt=8 --txfreet=0 --rxfreet=64 \
--mbcache=200 --portmask=0x3 --mbuf-size=2048 --max-pkt-len=9600
diff --git a/test_plans/kernelpf_iavf_test_plan.rst b/test_plans/kernelpf_iavf_test_plan.rst
index 72223c77..45c217e4 100644
--- a/test_plans/kernelpf_iavf_test_plan.rst
+++ b/test_plans/kernelpf_iavf_test_plan.rst
@@ -72,7 +72,7 @@ Bind VF device to igb_uio or vfio-pci
Start up VF port::
- ./testpmd -c f -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i
Test case: VF basic RX/TX
=========================
@@ -345,7 +345,7 @@ Ensure tester's port supports sending jumboframe::
Launch testpmd for VF port without enabling jumboframe option::
- ./testpmd -c f -n 4 -- -i
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i
testpmd> set fwd mac
testpmd> start
@@ -363,7 +363,7 @@ Ensure tester's port supports sending jumboframe::
Launch testpmd for VF port with jumboframe option::
- ./testpmd -c f -n 4 -- -i --max-pkt-len=3000
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --max-pkt-len=3000
testpmd> set fwd mac
testpmd> start
@@ -380,7 +380,7 @@ Test case: VF RSS
Start command with multi-queues like below::
- ./testpmd -c f -n 4 -- -i --txq=4 --rxq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
Show RSS RETA configuration::
@@ -424,7 +424,7 @@ Test case: VF RSS hash key
Start command with multi-queues like below::
- ./testpmd -c f -n 4 -- -i --txq=4 --rxq=4
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
Show port rss hash key::
@@ -518,7 +518,7 @@ Change mtu for large packet::
Launch the ``testpmd`` with the following arguments, add "--max-pkt-len"
for large packet::
- ./testpmd -c f -n 4 -- -i --port-topology=chained --max-pkt-len=9000
+ ./<build_target>/app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained --max-pkt-len=9000
Set csum forward::
diff --git a/test_plans/kni_test_plan.rst b/test_plans/kni_test_plan.rst
index 1d4736bb..1802f6ab 100644
--- a/test_plans/kni_test_plan.rst
+++ b/test_plans/kni_test_plan.rst
@@ -117,7 +117,7 @@ system to another)::
rmmod igb_uio
insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko
- ./examples/kni/build/app/kni -c 0xa0001e -n 4 -- -P -p 0x3 --config="(0,1,2,21),(1,3,4,23)" &
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0x3 --config="(0,1,2,21),(1,3,4,23)" &
Case config::
@@ -133,7 +133,7 @@ to write to NIC, threads 21 and 23 are used by the kernel.
As the kernel module is installed using ``"kthread_mode=single"`` the core
affinity is set using ``taskset``::
- ./build/app/kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
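Reading the ``--config`` tuples above as (port, lcore_rx, lcore_tx, lcore_kthread), consistent with the description of threads 21 and 23 serving the kernel, a quick parse of one tuple can be sketched as (field meaning is an assumption drawn from the surrounding text):

```shell
# Sketch: split one kni --config tuple into its assumed fields
# (port, lcore_rx, lcore_tx, lcore_kthread).
tuple="(2,1,2,21)"
echo "$tuple" | tr -d '()' | \
  awk -F, '{printf "port=%s rx=%s tx=%s kthread=%s\n", $1, $2, $3, $4}'
```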
Verify whether the interface has been added::
@@ -379,7 +379,7 @@ Assume that ``port 2 and 3`` are used by this application::
rmmod kni
insmod ./kmod/rte_kni.ko "lo_mode=lo_mode_ring_skb"
- ./build/app/kni -c 0xff -n 3 -- -p 0xf -i 0xf -o 0xf0
+ ./<build_target>/examples/dpdk-kni -c 0xff -n 3 -- -p 0xf -i 0xf -o 0xf0
Assume ``port A and B`` on tester connect to NIC ``port 2 and 3``.
@@ -407,7 +407,7 @@ successfully::
rmmod rte_kni
insmod ./kmod/rte_kni.ko <Changing Parameters>
- ./build/app/kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
Using ``dmesg`` to check whether kernel module is loaded with the specified
@@ -437,7 +437,7 @@ Compare performance results for loopback mode using:
insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <lo_mode and kthread_mode parameters>
- ./examples/kni/build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
At this point, the throughput is measured and recorded for the different
@@ -474,7 +474,7 @@ Compare performance results for bridge mode using:
The application is launched and the bridge is setup using the commands below::
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
- ./build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
ifconfig vEth2_0 up
ifconfig vEth3_0 up
@@ -560,7 +560,7 @@ The application is launched and the bridge is setup using the commands below::
echo 1 > /proc/sys/net/ipv4/ip_forward
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
- ./build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
ifconfig vEth2_0 192.170.2.1
ifconfig vEth3_0 192.170.3.1
diff --git a/test_plans/l2fwd_jobstats_test_plan.rst b/test_plans/l2fwd_jobstats_test_plan.rst
index ba5a53f2..585f853a 100644
--- a/test_plans/l2fwd_jobstats_test_plan.rst
+++ b/test_plans/l2fwd_jobstats_test_plan.rst
@@ -64,7 +64,7 @@ note: If using vfio the kernel must be >= 3.6+ and VT-d must be enabled in bios.
The application requires a number of command line options::
- ./build/l2fwd-jobstats [EAL options] -- -p PORTMASK [-q NQ] [-l]
+ ./<build_target>/examples/dpdk-l2fwd-jobstats [EAL options] -- -p PORTMASK [-q NQ] [-l]
The ``l2fwd-jobstats`` application is run with EAL parameters and parameters for
the application itself. For details about the EAL parameters, see the relevant
@@ -75,6 +75,13 @@ itself.
- q NQ: A number of queues (=ports) per lcore (default is 1)
- l: Use locale thousands separator when formatting big numbers.
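The ``-p PORTMASK`` option above is a bitmask of port ids; deriving it from a port list can be sketched as (purely illustrative):

```shell
# Sketch: derive the -p PORTMASK argument from a list of port ids
# (ports 0 and 1 -> 0x3).
mask=0
for p in 0 1; do
  mask=$((mask | (1 << p)))
done
printf -- '-p 0x%x\n' "$mask"
```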
+Build dpdk and examples=l2fwd-jobstats::
+ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
+ ninja -C <build_target>
+
+ meson configure -Dexamples=l2fwd-jobstats <build_target>
+ ninja -C <build_target>
+
Test Case: L2fwd jobstats check
================================================
@@ -82,7 +89,7 @@ Assume port 0 and 1 are connected to the traffic generator, to run the test
application in linuxapp environment with 2 lcores, 2 ports and 2 RX queues
per lcore::
- ./examples/l2fwd-jobstats/build/l2fwd-jobstats -c 0x03 -n 4 -- -q 2 -p 0x03 -l
+ ./<build_target>/examples/dpdk-l2fwd-jobstats -c 0x03 -n 4 -- -q 2 -p 0x03 -l
Then send 100,000 packets to port 0 and 100,000 packets to port 1, check that the
packet counts reported by the sample match what we set at the packet
--
2.25.1