* [dts][PATCH V2] test_plans/*: remove common_base info in test plan
From: lijuan.tu @ 2022-05-09 7:48 UTC
To: dts, Lingli Chen; +Cc: Lingli Chen
On Mon, 9 May 2022 11:03:44 +0000, Lingli Chen <linglix.chen@intel.com> wrote:
> The makefile-based build was removed from DPDK long ago, but the test plans still contain common_base info; the plans need to be kept in sync with the build system.
>
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
Applied, thanks
* [dts][PATCH V2] test_plans/*: remove common_base info in test plan
From: Lingli Chen @ 2022-05-09 11:03 UTC
To: dts; +Cc: Lingli Chen
The makefile-based build was removed from DPDK long ago, but the test plans still contain common_base info; the plans need to be kept in sync with the build system.
Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
v2: add ptpclient_test_plan modification
v1: modify 12 suites' test plans
test_plans/bbdev_test_plan.rst | 9 --
test_plans/compressdev_zlib_pmd_test_plan.rst | 8 --
test_plans/iavf_test_plan.rst | 11 +--
test_plans/ipv4_reassembly_test_plan.rst | 2 +-
test_plans/kni_test_plan.rst | 2 +-
test_plans/nvgre_test_plan.rst | 2 -
test_plans/packet_capture_test_plan.rst | 10 --
test_plans/ptpclient_test_plan.rst | 5 +-
test_plans/qinq_filter_test_plan.rst | 3 -
test_plans/vhost_1024_ethports_test_plan.rst | 14 +--
test_plans/vm2vm_virtio_pmd_test_plan.rst | 93 ++++++-------------
test_plans/vmdq_dcb_test_plan.rst | 4 +-
test_plans/vxlan_test_plan.rst | 3 -
13 files changed, 40 insertions(+), 126 deletions(-)
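Note: every common_base option removed below has a meson-era equivalent, passed at configure time rather than edited into a file. A minimal sketch of that workflow, assuming a stock DPDK tree (option names come from DPDK's meson_options.txt; adjust the build dir to your setup)::

    # options formerly toggled in config/common_base are now meson options
    meson setup x86_64-native-linuxapp-gcc -Denable_kmods=true -Dmax_ethports=1024
    # compile-time macros are passed through c_args
    meson configure x86_64-native-linuxapp-gcc -Dc_args=-DRTE_LIBRTE_IEEE1588
    ninja -C x86_64-native-linuxapp-gcc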
diff --git a/test_plans/bbdev_test_plan.rst b/test_plans/bbdev_test_plan.rst
index 2e2a64e5..2ac4cbe9 100644
--- a/test_plans/bbdev_test_plan.rst
+++ b/test_plans/bbdev_test_plan.rst
@@ -93,15 +93,6 @@ Prerequisites
measure the overhead added by the framework.
2) Turbo_sw is a sw-only driver wrapper for FlexRAN SDK optimized Turbo
coding libraries.
- It can be enabled by setting
-
- ``CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=y``
-
- The offload cases can be enabled by setting
-
- ``CONFIG_RTE_BBDEV_OFFLOAD_COST=y``
-
- They are both located in the build configuration file ``common_base``.
4. Test tool
diff --git a/test_plans/compressdev_zlib_pmd_test_plan.rst b/test_plans/compressdev_zlib_pmd_test_plan.rst
index 5d20dc61..678ea714 100644
--- a/test_plans/compressdev_zlib_pmd_test_plan.rst
+++ b/test_plans/compressdev_zlib_pmd_test_plan.rst
@@ -49,14 +49,6 @@ http://doc.dpdk.org/guides/compressdevs/zlib.html
Prerequisites
----------------------
-In order to enable this virtual compression PMD, user must:
-
- Set CONFIG_RTE_LIBRTE_PMD_ZLIB=y in config/common_base.
-
-and enable compressdev unit test:
-
- Set CONFIG_RTE_COMPRESSDEV_TEST=y in config/common_base.
-
A compress performance test app is added into DPDK to test CompressDev.
RTE_COMPRESS_ZLIB and RTE_LIB_COMPRESSDEV are enabled by default in the meson build.
Calgary corpus is a collection of text and binary data files, commonly used
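As a usage sketch of that perf app against the zlib PMD (flag names assumed from dpdk-test-compress-perf; the corpus path is hypothetical and should point at a Calgary corpus file)::

    ./x86_64-native-linuxapp-gcc/app/dpdk-test-compress-perf -l 1-2 --vdev=compress_zlib -- \
        --driver-name compress_zlib --input-file /root/calgary --seg-sz 2048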
diff --git a/test_plans/iavf_test_plan.rst b/test_plans/iavf_test_plan.rst
index ddd7fbb9..9f657905 100644
--- a/test_plans/iavf_test_plan.rst
+++ b/test_plans/iavf_test_plan.rst
@@ -457,9 +457,7 @@ Test Case: VF performance
Test Case: vector vf performance
---------------------------------
-1. config vector=y in config/common_base, and rebuild dpdk
-
-2. start testpmd for PF::
+1. start testpmd for PF::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 --socket-mem=1024,1024 --file-prefix=pf \
-a 08:00.0 -a 08:00.1 -- -i
@@ -467,7 +465,7 @@ Test Case: vector vf performance
testpmd>set vf mac addr 0 0 00:12:34:56:78:01
testpmd>set vf mac addr 1 0 00:12:34:56:78:02
-3. start testpmd for VF::
+2. start testpmd for VF::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0f8 -n 4 --master-lcore=3 --socket-mem=1024,1024 --file-prefix=vf \
-a 09:0a.0 -a 09:02.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=4 --rss-ip
@@ -476,10 +474,9 @@ Test Case: vector vf performance
testpmd>set fwd mac
testpmd>start
-4. send traffic and verify throughput
+3. send traffic and verify throughput
Test Case: scalar/bulk vf performance
-------------------------------------
-1. change CONFIG_RTE_LIBRTE_IAVF_INC_VECTOR=n in config/common_base, and rebuild dpdk.
-2. repeat test steps 2-4 in above test case: vector vf performance.
+1. Repeat the above test case (vector vf performance), but launch dpdk-testpmd with '--force-max-simd-bitwidth=64'.
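A sketch of that relaunch, reusing the VF command from the vector case with the SIMD width capped via the EAL flag (PCI addresses copied from step 2 of the previous case)::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0f8 -n 4 --master-lcore=3 --socket-mem=1024,1024 \
        --file-prefix=vf --force-max-simd-bitwidth=64 -a 09:0a.0 -a 09:02.0 \
        -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=4 --rss-ip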
diff --git a/test_plans/ipv4_reassembly_test_plan.rst b/test_plans/ipv4_reassembly_test_plan.rst
index 354dae51..2f6de54e 100644
--- a/test_plans/ipv4_reassembly_test_plan.rst
+++ b/test_plans/ipv4_reassembly_test_plan.rst
@@ -106,7 +106,7 @@ Sample command::
-P -p 0x2 --config "(1,0,1)" --maxflows=4096 --flowttl=10s
Modifies the sample app source code to enable up to 7 fragments per packet,
-and it need set the "CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG=7" in ./config/common_base and re-build DPDK.
+which requires setting "RTE_LIBRTE_IP_FRAG_MAX_FRAG=7" in ./config/rte_config.h and rebuilding DPDK.
Sends 4K packets split in 7 fragments each with a ``maxflows`` of 4K.
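One way to make that rte_config.h change before rebuilding, as a sketch (the default value in your tree may differ)::

    sed -i 's/#define RTE_LIBRTE_IP_FRAG_MAX_FRAG .*/#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 7/' config/rte_config.h
    ninja -C x86_64-native-linuxapp-gcc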
diff --git a/test_plans/kni_test_plan.rst b/test_plans/kni_test_plan.rst
index 1802f6ab..ee6c977a 100644
--- a/test_plans/kni_test_plan.rst
+++ b/test_plans/kni_test_plan.rst
@@ -121,7 +121,7 @@ system to another)::
Case config::
- For enable KNI features, need to set the "CONFIG_RTE_KNI_KMOD=y" in ./config/common_base and re-build DPDK.
+ To enable KNI features, build DPDK with '-Denable_kmods=True'.
Test Case: ifconfig testing
===========================
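A sketch of that kmods build plus loading the resulting KNI module (module path assumed for a meson Linux build)::

    meson setup x86_64-native-linuxapp-gcc -Denable_kmods=True
    ninja -C x86_64-native-linuxapp-gcc
    insmod ./x86_64-native-linuxapp-gcc/kernel/linux/kni/rte_kni.ko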
diff --git a/test_plans/nvgre_test_plan.rst b/test_plans/nvgre_test_plan.rst
index c05292ee..71a406fd 100644
--- a/test_plans/nvgre_test_plan.rst
+++ b/test_plans/nvgre_test_plan.rst
@@ -55,8 +55,6 @@ plugged into the available PCIe Gen3 8-lane slot.
DUT board must be a two-socket system and each CPU must have more than 8 lcores.
-For fortville NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR
-in dpdk/config/common_base file to n.
Test Case: NVGRE ipv4 packet detect
===================================
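The removed build-time vector toggle has a runtime analogue: if a scalar i40e path is still wanted for this plan, testpmd can be launched with the EAL flag below (a sketch, not a required step)::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 --force-max-simd-bitwidth=64 -- -i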
diff --git a/test_plans/packet_capture_test_plan.rst b/test_plans/packet_capture_test_plan.rst
index e2be1430..7e7f8768 100644
--- a/test_plans/packet_capture_test_plan.rst
+++ b/test_plans/packet_capture_test_plan.rst
note: portB0/portB1 are the bound ports.
Prerequisites
=============
-Enable pcap lib in dpdk code and recompile::
-
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
Test cases
==========
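With meson, the pcap PMD is built automatically whenever libpcap development headers are found, so the removed config edit reduces to installing the dependency before building (package names vary by distro; a sketch)::

    apt-get install libpcap-dev      # or: dnf install libpcap-devel
    meson setup x86_64-native-linuxapp-gcc && ninja -C x86_64-native-linuxapp-gcc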
diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index 31ba2b15..3ae8b68b 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -46,10 +46,7 @@ has been installed on the tester.
Case Config::
- Meson: For support IEEE1588, need to execute "sed -i '$a\#define RTE_LIBRTE_IEEE1588 1' config/rte_config.h",
- and re-build DPDK.
- $ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
- $ ninja -C <build_target>
+ Meson: to support IEEE1588, build DPDK with '-Dc_args=-DRTE_LIBRTE_IEEE1588'.
The sample should be validated on Fortville, Niantic and i350 NICs.
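A sketch of the full build command with that c_args flag, mirroring the meson invocation the old text spelled out::

    CC=gcc meson setup x86_64-native-linuxapp-gcc -Denable_kmods=True -Dlibdir=lib \
        --default-library=static -Dc_args=-DRTE_LIBRTE_IEEE1588
    ninja -C x86_64-native-linuxapp-gcc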
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst
index 488596ed..6781c517 100644
--- a/test_plans/qinq_filter_test_plan.rst
+++ b/test_plans/qinq_filter_test_plan.rst
@@ -53,9 +53,6 @@ Test Case 1: test qinq packet type
Testpmd configuration - 4 RX/TX queues per port
------------------------------------------------
-#. For fortville NICs need change the value of
- CONFIG_RTE_LIBRTE_I40E_INC_VECTOR in dpdk/config/common_base file to n.
-
#. set up testpmd with fortville NICs::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss
diff --git a/test_plans/vhost_1024_ethports_test_plan.rst b/test_plans/vhost_1024_ethports_test_plan.rst
index a95042f2..1c69c123 100644
--- a/test_plans/vhost_1024_ethports_test_plan.rst
+++ b/test_plans/vhost_1024_ethports_test_plan.rst
@@ -41,19 +41,13 @@ So when vhost-user ports number > 1023, it will report an error "failed to add l
Test Case1: Basic test for launch vhost with 1023 ethports
===========================================================
-1. SW preparation: change "CONFIG_RTE_MAX_ETHPORTS" to 1023 in DPDK configure file::
-
- vi ./config/common_base
- -CONFIG_RTE_MAX_ETHPORTS=32
- +CONFIG_RTE_MAX_ETHPORTS=1023
+1. SW preparation::
+ build dpdk with '-Dmax_ethports=1024'
2. Launch vhost with 1023 vdev::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
--vdev 'eth_vhost1,iface=vhost-net1,queues=1' ... -- -i # only two vdevs listed; the other 1021 vdevs, from eth_vhost2 to eth_vhost1022, are omitted
-3. Change "CONFIG_RTE_MAX_ETHPORTS" back to 32 in DPDK configure file::
-
- vi ./config/common_base
- +CONFIG_RTE_MAX_ETHPORTS=32
- -CONFIG_RTE_MAX_ETHPORTS=1023
+3. Restore DPDK::
+ build dpdk with '-Dmax_ethports=32'
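A sketch of generating the 1023 --vdev arguments with a shell loop instead of typing them out (the iface naming here is illustrative; the listing above uses plain vhost-net for the first port)::

    VDEVS=""
    for i in $(seq 0 1022); do
        VDEVS="$VDEVS --vdev eth_vhost$i,iface=vhost-net$i,queues=1"
    done
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 --file-prefix=vhost $VDEVS -- -i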
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index 903695ff..6afb8d6f 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -296,60 +296,47 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
-3. On VM1, enable pcap lib in dpdk code and recompile::
-
- diff --git a/config/common_base b/config/common_base
- index 6b96e0e80..0f7d22f22 100644
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
-4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+3. Bind virtio with vfio-pci driver, then run testpmd, set rxonly mode for virtio-pmd on VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd rxonly
testpmd>start
-5. Bootup pdump in VM1::
+4. Bootup pdump in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
-6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
+5. On VM2, bind virtio with vfio-pci driver, then run testpmd, configure tx_packets to 8k length with chain mode::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set txpkts 2000,2000,2000,2000
-7. Send ten packets with 8k length from virtio-pmd on VM2::
+6. Send ten packets with 8k length from virtio-pmd on VM2::
testpmd>set burst 1
testpmd>start tx_first 10
-8. Check payload is correct in each dumped packets.
+7. Check payload is correct in each dumped packet.
-9. Relaunch testpmd in VM1::
+8. Relaunch testpmd in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
-10. Bootup pdump in VM1::
+9. Bootup pdump in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap,mbuf-size=8000'
-11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
+10. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>set burst 1
testpmd>start tx_first 10
-12. Check payload is correct in each dumped packets.
+11. Check payload is correct in each dumped packet.
Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid check
===================================================================================
@@ -384,60 +371,47 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
-3. On VM1, enable pcap lib in dpdk code and recompile::
-
- diff --git a/config/common_base b/config/common_base
- index 6b96e0e80..0f7d22f22 100644
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
-4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+3. Bind virtio with vfio-pci driver, then run testpmd, set rxonly mode for virtio-pmd on VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd rxonly
testpmd>start
-5. Bootup pdump in VM1::
+4. Bootup pdump in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
-6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
+5. On VM2, bind virtio with vfio-pci driver, then run testpmd, configure tx_packets to 8k length with chain mode::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set txpkts 2000,2000,2000,2000
-7. Send ten packets from virtio-pmd on VM2::
+6. Send ten packets from virtio-pmd on VM2::
testpmd>set burst 1
testpmd>start tx_first 10
-8. Check payload is correct in each dumped packets.
+7. Check payload is correct in each dumped packet.
-9. Relaunch testpmd in VM1::
+8. Relaunch testpmd in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
-10. Bootup pdump in VM1::
+9. Bootup pdump in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
-11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
+10. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set burst 1
testpmd>start tx_first 10
-12. Check payload is correct in each dumped packets.
+11. Check payload is correct in each dumped packet.
Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid check
===================================================================================
@@ -472,60 +446,47 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-3. On VM1, enable pcap lib in dpdk code and recompile::
-
- diff --git a/config/common_base b/config/common_base
- index 6b96e0e80..0f7d22f22 100644
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
-4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+3. Bind virtio with vfio-pci driver, then run testpmd, set rxonly mode for virtio-pmd on VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd rxonly
testpmd>start
-5. Bootup pdump in VM1::
+4. Bootup pdump in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
-6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
+5. On VM2, bind virtio with vfio-pci driver, then run testpmd, configure tx_packets to 8k length with chain mode::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set txpkts 2000,2000,2000,2000
-7. Send ten packets from virtio-pmd on VM2::
+6. Send ten packets from virtio-pmd on VM2::
testpmd>set burst 1
testpmd>start tx_first 10
-8. Check payload is correct in each dumped packets.
+7. Check payload is correct in each dumped packet.
-9. Relaunch testpmd in VM1::
+8. Relaunch testpmd in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
-10. Bootup pdump in VM1::
+9. Bootup pdump in VM1::
./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
-11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
+10. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set burst 1
testpmd>start tx_first 10
-12. Check payload is correct in each dumped packets.
+11. Check payload is correct in each dumped packet.
Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
============================================================
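One way to do the "check payload" steps in the cases above is to read back the dumped pcap on VM1 (tcpdump assumed available in the guest)::

    tcpdump -nn -r /root/pdump-rx.pcap -X | less    # inspect hex/ASCII payload of each dumped packet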
diff --git a/test_plans/vmdq_dcb_test_plan.rst b/test_plans/vmdq_dcb_test_plan.rst
index a4beaa93..1d20a9bc 100644
--- a/test_plans/vmdq_dcb_test_plan.rst
+++ b/test_plans/vmdq_dcb_test_plan.rst
@@ -91,7 +91,7 @@ Expected Result:
Test Case 2: Verify VMDQ & DCB with 16 Pools and 8 TCs
======================================================
-1. change CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM to 8 in "./config/common_linuxapp", rebuild DPDK.
+1. Change RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM to 8 in "./config/rte_config.h" and rebuild DPDK.
meson: change "#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4" to 8 in config/rte_config.h, rebuild DPDK.
2. Repeat Test Case 1, with `--nb-pools 16` and `--nb-tcs 8` of the sample application::
@@ -115,4 +115,4 @@ Expected result:
- Every RX queue should have received approximately (+/-15%) the same number of incoming packets
- verify queue id should be in [vlan user priority value * 2, vlan user priority value * 2 + 1]
-(NOTE: SIGHUP output will obviously change to show 8 columns per row, with only 16 rows)
\ No newline at end of file
+(NOTE: SIGHUP output will obviously change to show 8 columns per row, with only 16 rows)
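A sketch of that rte_config.h edit plus rebuild (assumes the stock default of 4 shown in config/rte_config.h)::

    sed -i 's/#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4/#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 8/' config/rte_config.h
    ninja -C x86_64-native-linuxapp-gcc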
diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index f7bdeca3..12e35bee 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -53,9 +53,6 @@ plugged into the available PCIe Gen3 8-lane slot.
DUT board must be a two-socket system and each CPU must have more than 8 lcores.
-For fortville NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR
-in dpdk/config/common_base file to n.
-
Test Case: Vxlan ipv4 packet detect
===================================
Start testpmd with tunneling packet type to vxlan::
--
2.17.1