From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jun Dong <junx.dong@intel.com>
To: dts@dpdk.org
Cc: PingX.Yu@intel.com, weix.ling@intel.com, junx.dong@intel.com
Date: Wed, 13 Oct 2021 17:24:48 +0800
Message-Id: <1634117089-150476-2-git-send-email-junx.dong@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1634117089-150476-1-git-send-email-junx.dong@intel.com>
References: <1634117089-150476-1-git-send-email-junx.dong@intel.com>
Subject: [dts] [PATCH V1 2/3] test_plans/*: changed eal -w parameter to -a

- change eal parameter -w to -a for all test plans

Signed-off-by: Jun Dong <junx.dong@intel.com>
---
 test_plans/ABI_stable_test_plan.rst                |  2 +-
 test_plans/cloud_filter_with_l4_port_test_plan.rst |  4 +-
 .../cvl_advanced_iavf_rss_gtpu_test_plan.rst       |  2 +-
 test_plans/cvl_advanced_iavf_rss_test_plan.rst     |  2 +-
 ...ed_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst |  2 +-
 test_plans/cvl_advanced_rss_pppoe_test_plan.rst    |  4 +-
 ...dvanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst |  4 +-
 test_plans/cvl_dcf_acl_filter_test_plan.rst        | 20 +++---
 test_plans/cvl_dcf_date_path_test_plan.rst         |  4 +-
 .../cvl_dcf_switch_filter_pppoe_test_plan.rst      |  2 +-
 test_plans/cvl_dcf_switch_filter_test_plan.rst     | 10 +--
 test_plans/cvl_fdir_test_plan.rst                  |  4 +-
 test_plans/cvl_limit_value_test_test_plan.rst      | 18 +++---
 test_plans/cvl_switch_filter_pppoe_test_plan.rst   |  4 +-
 test_plans/cvl_switch_filter_test_plan.rst         |  4 +-
 test_plans/ddp_l2tpv3_test_plan.rst                |  2 +-
 test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst | 18 +++---
 ...le_package_download_in_ice_driver_test_plan.rst |  4 +-
 test_plans/eventdev_perf_test_plan.rst             | 38 ++++++------
 test_plans/eventdev_pipeline_perf_test_plan.rst    | 38 ++++++------
 test_plans/flexible_rxd_test_plan.rst              | 24 ++++----
 test_plans/floating_veb_test_plan.rst              | 38 ++++++------
 test_plans/fortville_rss_input_test_plan.rst       |  2 +-
 test_plans/generic_flow_api_test_plan.rst          | 36 +++++------
 test_plans/iavf_flexible_descriptor_test_plan.rst  | 22 +++----
 .../iavf_package_driver_error_handle_test_plan.rst |  2 +-
 test_plans/iavf_test_plan.rst                      | 10 +--
 test_plans/inline_ipsec_test_plan.rst              | 22 +++----
 test_plans/ip_pipeline_test_plan.rst               | 14 ++---
 test_plans/ipsec_gw_and_library_test_plan.rst      |  8 +--
 test_plans/l2tp_esp_coverage_test_plan.rst         | 12 ++--
 test_plans/linux_modules_test_plan.rst             |  4 +-
 test_plans/macsec_for_ixgbe_test_plan.rst          |  6 +-
 ...malicious_driver_event_indication_test_plan.rst | 10 +--
 test_plans/pmd_test_plan.rst                       |  2 +-
 test_plans/port_representor_test_plan.rst          |  6 +-
 test_plans/qinq_filter_test_plan.rst               | 12 ++--
 .../runtime_vf_queue_number_maxinum_test_plan.rst  |  8 +--
 test_plans/runtime_vf_queue_number_test_plan.rst   | 28 ++++-----
 test_plans/unit_tests_dump_test_plan.rst           |  2 +-
 test_plans/unit_tests_event_timer_test_plan.rst    |  2 +-
 test_plans/veb_switch_test_plan.rst                | 30 ++++-----
 test_plans/vf_l3fwd_test_plan.rst                  |  2 +-
 test_plans/vf_macfilter_test_plan.rst              |  8 +--
 test_plans/vf_packet_rxtx_test_plan.rst            |  4 +-
 test_plans/vf_pf_reset_test_plan.rst               |  6 +-
 test_plans/vf_vlan_test_plan.rst                   |  2 +-
 .../vhost_virtio_user_interrupt_test_plan.rst      |  4 +-
 test_plans/virtio_pvp_regression_test_plan.rst     |  4 +-
 test_plans/vm2vm_virtio_pmd_test_plan.rst          | 72 +++++++++++-----------
 50 files changed, 294 insertions(+), 294 deletions(-)

diff --git a/test_plans/ABI_stable_test_plan.rst b/test_plans/ABI_stable_test_plan.rst
index c15af72..16934c4 100644
--- a/test_plans/ABI_stable_test_plan.rst
+++ b/test_plans/ABI_stable_test_plan.rst
@@ -292,7 +292,7 @@ Build shared libraries, (just enable i40e pmd for testing)::
 Run testpmd application refer to Common Test steps with ixgbe pmd NIC.::
-    testpmd -c 0xf -n 4 -d -w 18:00.0 -- -i
+    testpmd -c 0xf -n 4 -d -a 18:00.0 -- -i
 Test txonly::
diff --git a/test_plans/cloud_filter_with_l4_port_test_plan.rst b/test_plans/cloud_filter_with_l4_port_test_plan.rst
index da39c9a..ed2109e 100644
--- a/test_plans/cloud_filter_with_l4_port_test_plan.rst
+++ b/test_plans/cloud_filter_with_l4_port_test_plan.rst
@@ -49,7 +49,7 @@ Prerequisites
     ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:81:00.0
 4.Launch the testpmd::
-    ./testpmd -l 0-3 -n 4 -w 81:00.0 --file-prefix=test -- -i --rxq=16 --txq=16 --disable-rss
+    ./testpmd -l 0-3 -n 4 -a 81:00.0 --file-prefix=test -- -i --rxq=16 --txq=16 --disable-rss
     testpmd> set fwd rxonly
     testpmd> set promisc all off
     testpmd> set verbose 1
@@ -517,4 +517,4 @@ Test Case 3: NEGATIVE_TEST create conflicted rules::
     testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 156 / end actions pf / queue index 2 / end
-    Verify rules can not create.
\ No newline at end of file
+    Verify rules can not create.
diff --git a/test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst b/test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst
index 909c722..7d99f77 100644
--- a/test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst
+++ b/test_plans/cvl_advanced_iavf_rss_gtpu_test_plan.rst
@@ -213,7 +213,7 @@ Prerequisites
 5. Launch the testpmd to configuration queue of rx and tx number 16 in DUT::
-    testpmd>./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff -n 4 -w 0000:18:01.0 -- -i --rxq=16 --txq=16
+    testpmd>./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff -n 4 -a 0000:18:01.0 -- -i --rxq=16 --txq=16
     testpmd>set fwd rxonly
     testpmd>set verbose 1
diff --git a/test_plans/cvl_advanced_iavf_rss_test_plan.rst b/test_plans/cvl_advanced_iavf_rss_test_plan.rst
index 27c8792..0e0a810 100644
--- a/test_plans/cvl_advanced_iavf_rss_test_plan.rst
+++ b/test_plans/cvl_advanced_iavf_rss_test_plan.rst
@@ -349,7 +349,7 @@ Prerequisites
 5. Launch the testpmd to configuration queue of rx and tx number 16 in DUT::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -w 0000:18:01.0 -- -i --rxq=16 --txq=16
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -a 0000:18:01.0 -- -i --rxq=16 --txq=16
     testpmd>set fwd rxonly
     testpmd>set verbose 1
diff --git a/test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst b/test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst
index b120986..b9229f1 100644
--- a/test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst
+++ b/test_plans/cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst
@@ -119,7 +119,7 @@ Prerequisites
 6. Launch the testpmd to configuration queue of rx and tx number 16 in DUT::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -w 0000:18:01.0 -- -i --rxq=16 --txq=16
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -a 0000:18:01.0 -- -i --rxq=16 --txq=16
     testpmd>set fwd rxonly
     testpmd>set verbose 1
diff --git a/test_plans/cvl_advanced_rss_pppoe_test_plan.rst b/test_plans/cvl_advanced_rss_pppoe_test_plan.rst
index e209e0d..829a94d 100644
--- a/test_plans/cvl_advanced_rss_pppoe_test_plan.rst
+++ b/test_plans/cvl_advanced_rss_pppoe_test_plan.rst
@@ -125,7 +125,7 @@ Prerequisites
 5. Launch the testpmd in DUT for cases with toeplitz hash function::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0 -- -i --rxq=16 --txq=16 --disable-rss
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0 -- -i --rxq=16 --txq=16 --disable-rss
     testpmd> port config 0 rss-hash-key ipv4 1b9d58a4b961d9cd1c56ad1621c3ad51632c16a5d16c21c3513d132c135d132c13ad1531c23a51d6ac49879c499d798a7d949c8a
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -133,7 +133,7 @@ Prerequisites
 Launch testpmd for cases with symmetric_toeplitz and simple_xor hash function::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0 -- -i --rxq=16 --txq=16
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0 -- -i --rxq=16 --txq=16
 6. on tester side, copy the layer python file to /root::
diff --git a/test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst b/test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst
index a0c232d..bc2db46 100644
--- a/test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst
+++ b/test_plans/cvl_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan.rst
@@ -116,7 +116,7 @@ Prerequisites
 5. Launch the testpmd in DUT for cases with toeplitz hash function::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0 -- -i --rxq=16 --txq=16 --disable-rss
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0 -- -i --rxq=16 --txq=16 --disable-rss
     testpmd> port config 0 rss-hash-key ipv4 1b9d58a4b961d9cd1c56ad1621c3ad51632c16a5d16c21c3513d132c135d132c13ad1531c23a51d6ac49879c499d798a7d949c8a
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -124,7 +124,7 @@ Prerequisites
 Launch testpmd for cases with symmetric_toeplitz and simple_xor hash function::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0 -- -i --rxq=16 --txq=16
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0 -- -i --rxq=16 --txq=16
 6. on tester side, copy the layer python file to /root::
diff --git a/test_plans/cvl_dcf_acl_filter_test_plan.rst b/test_plans/cvl_dcf_acl_filter_test_plan.rst
index 378514d..c74dcb6 100644
--- a/test_plans/cvl_dcf_acl_filter_test_plan.rst
+++ b/test_plans/cvl_dcf_acl_filter_test_plan.rst
@@ -95,7 +95,7 @@ Prerequisites
 9. Launch dpdk on VF0, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -w 0000:86:01.0,cap=dcf --file-prefix=vf0 --log-level="ice,7" -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:86:01.0,cap=dcf --file-prefix=vf0 --log-level="ice,7" -- -i
     testpmd> set fwd mac
     testpmd> set verbose 1
     testpmd> start
@@ -106,7 +106,7 @@ Prerequisites
 10. Launch dpdk on VF1::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -a 86:01.1 --file-prefix=vf1 -- -i
     testpmd> set fwd rxonly
     testpmd> set verbose 1
     testpmd> start
@@ -118,7 +118,7 @@ Prerequisites
 or launch one testpmd on VF0 and VF1::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -w 0000:86:01.0,cap=dcf -w 86:01.1 --file-prefix=vf0 --log-level="ice,7" -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:86:01.0,cap=dcf -a 86:01.1 --file-prefix=vf0 --log-level="ice,7" -- -i
 Common steps of basic cases
 ===========================
@@ -516,11 +516,11 @@ while we can create 256 ipv4-udp/ipv4-tcp/ipv4-sctp rules at most.
 1. launch DPDK on VF0, request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -w 86:01.0,cap=dcf -- -i --port-topology=loop
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 86:01.0,cap=dcf -- -i --port-topology=loop
 Launch dpdk on VF1::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -a 86:01.1 --file-prefix=vf1 -- -i
 2. create a full mask rule, it's created as a switch rule::
@@ -592,11 +592,11 @@ Test Case 6: max entry number ipv4-other
 ========================================
 1. launch DPDK on VF0, request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -w 86:01.0,cap=dcf -- -i --port-topology=loop
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 86:01.0,cap=dcf -- -i --port-topology=loop
 Launch dpdk on VF1::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -a 86:01.1 --file-prefix=vf1 -- -i
 2. create a full mask rule, it's created as a switch rule::
@@ -669,11 +669,11 @@ Test Case 7: max entry number combined patterns
 ===============================================
 1. launch DPDK on VF0, request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -w 86:01.0,cap=dcf -- -i --port-topology=loop
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 86:01.0,cap=dcf -- -i --port-topology=loop
 Launch dpdk on VF1::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -a 86:01.1 --file-prefix=vf1 -- -i
 2. create 32 ipv4-other ACL rules::
@@ -912,7 +912,7 @@ Test Case 11: switch/acl/fdir/rss rules combination
 ===================================================
 1. launch testpmd::
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -w 86:01.0,cap=dcf -w 86:01.1 --log-level="ice,7" -- -i --port-topology=loop --rxq=4 --txq=4
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 -a 86:01.0,cap=dcf -a 86:01.1 --log-level="ice,7" -- -i --port-topology=loop --rxq=4 --txq=4
 2. create rules::
diff --git a/test_plans/cvl_dcf_date_path_test_plan.rst b/test_plans/cvl_dcf_date_path_test_plan.rst
index 380090f..a5f8bdf 100755
--- a/test_plans/cvl_dcf_date_path_test_plan.rst
+++ b/test_plans/cvl_dcf_date_path_test_plan.rst
@@ -17,7 +17,7 @@ Set a VF as trust ::
 Launch dpdk on the VF, request DCF mode ::
     ./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -w 18:01.0,cap=dcf --file-prefix=vf -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-10 -n 4 -a 18:01.0,cap=dcf --file-prefix=vf -- -i
 Test Case: Launch DCF and do macfwd
@@ -200,4 +200,4 @@ Test Case: Measure performance of DCF interface
 The steps are same to iAVF performance test, a slight difference on launching testpmd devarg. DCF need cap=dcf option.
-Expect the performance is same to iAVF
\ No newline at end of file
+Expect the performance is same to iAVF
diff --git a/test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst b/test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst
index 0149b46..d781f73 100644
--- a/test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst
+++ b/test_plans/cvl_dcf_switch_filter_pppoe_test_plan.rst
@@ -201,7 +201,7 @@ Prerequisites
 9. Launch dpdk on VF0 and VF1, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -w 0000:18:01.1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:01.1 -- -i
     testpmd> set portlist 1
     testpmd> set fwd rxonly
     testpmd> set verbose 1
diff --git a/test_plans/cvl_dcf_switch_filter_test_plan.rst b/test_plans/cvl_dcf_switch_filter_test_plan.rst
index 116b2cc..76857e4 100644
--- a/test_plans/cvl_dcf_switch_filter_test_plan.rst
+++ b/test_plans/cvl_dcf_switch_filter_test_plan.rst
@@ -231,7 +231,7 @@ Prerequisites
 9. Launch dpdk on VF0 and VF1, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -w 0000:18:01.1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:01.1 -- -i
     testpmd> set portlist 1
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -2392,7 +2392,7 @@ Subcase 1: add existing rules but with different vfs
 1. Launch dpdk on VF0, VF1 and VF2, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -w 0000:18:01.1 -w 0000:18:01.2 -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:01.1 -a 0000:18:01.2 -- -i
     testpmd> set portlist 1,2
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -2454,7 +2454,7 @@ Subcase 3: add two rules with one rule's input set included in the other
 1. Launch dpdk on VF0, VF1 and VF2, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -w 0000:18:01.1 -w 0000:18:01.2 -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:01.1 -a 0000:18:01.2 -- -i
     testpmd> set portlist 1,2
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -2617,7 +2617,7 @@ are dropped.
 1. Launch dpdk on VF0, VF1 and VF2, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -w 0000:18:01.1 -w 0000:18:01.2 -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:01.1 -a 0000:18:01.2 -- -i
     testpmd> set portlist 1,2
     testpmd> set fwd mac
     testpmd> set verbose 1
@@ -2688,7 +2688,7 @@ This case is designed based on 4*25G NIC.
 6. launch dpdk on VF0, and request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -- -i
 7. set a switch rule to each VF from DCF, totally 63 rules::
diff --git a/test_plans/cvl_fdir_test_plan.rst b/test_plans/cvl_fdir_test_plan.rst
index 2458601..64d06f1 100644
--- a/test_plans/cvl_fdir_test_plan.rst
+++ b/test_plans/cvl_fdir_test_plan.rst
@@ -145,7 +145,7 @@ Prerequisites
 5. Launch the app ``testpmd`` with the following arguments::
-    ./testpmd -c 0xff -n 6 -w 86:00.0 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
+    ./testpmd -c 0xff -n 6 -a 86:00.0 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -156,7 +156,7 @@ Prerequisites
 Notes: if need two ports environment, launch ``testpmd`` with the following arguments::
-    ./testpmd -c 0xff -n 6 -w 86:00.0 -w 86:00.1 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
+    ./testpmd -c 0xff -n 6 -a 86:00.0 -a 86:00.1 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
 Default parameters
 ------------------
diff --git a/test_plans/cvl_limit_value_test_test_plan.rst b/test_plans/cvl_limit_value_test_test_plan.rst
index 9fec36f..160b126 100644
--- a/test_plans/cvl_limit_value_test_test_plan.rst
+++ b/test_plans/cvl_limit_value_test_test_plan.rst
@@ -91,7 +91,7 @@ Prerequisites
 5. Launch the app ``testpmd`` with the following arguments::
-    ./testpmd -c 0xff -n 6 -w 86:01.0 -w 86:01.1 --file-prefix=vf -- -i --rxq=16 --txq=16
+    ./testpmd -c 0xff -n 6 -a 86:01.0 -a 86:01.1 --file-prefix=vf -- -i --rxq=16 --txq=16
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -159,7 +159,7 @@ if 2 vfs generated by 2 pf port, each vf can create 14336 rules at most.
 1. start testpmd on vf00::
-    ./testpmd -c 0xf -n 6 -w 86:01.0 --file-prefix=vf00 -- -i --rxq=4 --txq=4
+    ./testpmd -c 0xf -n 6 -a 86:01.0 --file-prefix=vf00 -- -i --rxq=4 --txq=4
 create 1 rule on vf00::
@@ -169,7 +169,7 @@ if 2 vfs generated by 2 pf port, each vf can create 14336 rules at most.
 2. start testpmd on vf10::
-    ./testpmd -c 0xf0 -n 6 -w 86:0a.0 --file-prefix=vf10 -- -i --rxq=4 --txq=4
+    ./testpmd -c 0xf0 -n 6 -a 86:0a.0 --file-prefix=vf10 -- -i --rxq=4 --txq=4
 create 14336 rules on vf10::
@@ -218,7 +218,7 @@ this card can create (2048 + 14336)*2=32768 rules.
 2. start testpmd on vf00::
-    ./testpmd -c 0xf -n 6 -w 86:01.0 --file-prefix=vf00 -- -i --rxq=4 --txq=4
+    ./testpmd -c 0xf -n 6 -a 86:01.0 --file-prefix=vf00 -- -i --rxq=4 --txq=4
 create 1 rule on vf00::
@@ -228,7 +228,7 @@ this card can create (2048 + 14336)*2=32768 rules.
 2. start testpmd on vf10::
-    ./testpmd -c 0xf0 -n 6 -w 86:0a.0 --file-prefix=vf10 -- -i --rxq=4 --txq=4
+    ./testpmd -c 0xf0 -n 6 -a 86:0a.0 --file-prefix=vf10 -- -i --rxq=4 --txq=4
 create 14335 rules on vf10::
@@ -289,7 +289,7 @@ so if create 16384 rules on pf1,check failed to create rule on vf00 and vf10(vf0
 3. start testpmd on vf00 and vf10::
-    ./testpmd -c 0xf -n 6 -w 86:01.0 -w 86:11.0 --file-prefix=vf00 -- -i --rxq=4 --txq=4
+    ./testpmd -c 0xf -n 6 -a 86:01.0 -a 86:11.0 --file-prefix=vf00 -- -i --rxq=4 --txq=4
 create 1 rule on vf00::
@@ -435,7 +435,7 @@ Prerequisites
 5. Launch the app ``testpmd`` with the following arguments::
-    ./testpmd -c 0xff -n 6 -w 86:00.0 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
+    ./testpmd -c 0xff -n 6 -a 86:00.0 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -446,7 +446,7 @@ Prerequisites
 Notes: if need two ports environment, launch ``testpmd`` with the following arguments::
-    ./testpmd -c 0xff -n 6 -w 86:00.0 -w 86:00.1 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
+    ./testpmd -c 0xff -n 6 -a 86:00.0 -a 86:00.1 --log-level="ice,7" -- -i --portmask=0xff --rxq=64 --txq=64 --port-topology=loop
 Test case: add/delete rules
 ============================
@@ -529,7 +529,7 @@ Prerequisites
 8. Launch dpdk on VF0 and VF1, and VF0 request DCF mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:01.0,cap=dcf -w 0000:18:01.1 -- -i
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:01.1 -- -i
     testpmd> set portlist 1
     testpmd> set fwd rxonly
     testpmd> set verbose 1
diff --git a/test_plans/cvl_switch_filter_pppoe_test_plan.rst b/test_plans/cvl_switch_filter_pppoe_test_plan.rst
index f63965b..897e8c6 100644
--- a/test_plans/cvl_switch_filter_pppoe_test_plan.rst
+++ b/test_plans/cvl_switch_filter_pppoe_test_plan.rst
@@ -203,7 +203,7 @@ Prerequisites
 6. Launch dpdk with the following arguments in non-pipeline mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0 --log-level="ice,8" -- -i --txq=16 --rxq=16 --cmdline-file=testpmd_fdir_rules
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0 --log-level="ice,8" -- -i --txq=16 --rxq=16 --cmdline-file=testpmd_fdir_rules
     testpmd> port config 0 rss-hash-key ipv4 1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -217,7 +217,7 @@ Prerequisites
 Launch dpdk in pipeline mode with the following testpmd command line::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0,pipeline-mode-support=1 --log-level="ice,8" -- -i --txq=16 --rxq=16
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0,pipeline-mode-support=1 --log-level="ice,8" -- -i --txq=16 --rxq=16
 Test case: Ethertype filter
 ===========================
diff --git a/test_plans/cvl_switch_filter_test_plan.rst b/test_plans/cvl_switch_filter_test_plan.rst
index 992aa6c..ae29e64 100644
--- a/test_plans/cvl_switch_filter_test_plan.rst
+++ b/test_plans/cvl_switch_filter_test_plan.rst
@@ -181,7 +181,7 @@ Prerequisites
 6. Launch dpdk with the following arguments in non-pipeline mode::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0 --log-level="ice,8" -- -i --txq=16 --rxq=16 --cmdline-file=testpmd_fdir_rules
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0 --log-level="ice,8" -- -i --txq=16 --rxq=16 --cmdline-file=testpmd_fdir_rules
     testpmd> port config 0 rss-hash-key ipv4 1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd
     testpmd> set fwd rxonly
     testpmd> set verbose 1
@@ -195,7 +195,7 @@ Prerequisites
 Launch dpdk in pipeline mode with the following testpmd command line::
-    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:18:00.0,pipeline-mode-support=1 --log-level="ice,8" -- -i --txq=16 --rxq=16
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 0000:18:00.0,pipeline-mode-support=1 --log-level="ice,8" -- -i --txq=16 --rxq=16
 Test case: VXLAN non-pipeline mode
 ==================================
diff --git a/test_plans/ddp_l2tpv3_test_plan.rst b/test_plans/ddp_l2tpv3_test_plan.rst
index 6d3952f..8262da3 100644
--- a/test_plans/ddp_l2tpv3_test_plan.rst
+++ b/test_plans/ddp_l2tpv3_test_plan.rst
@@ -100,7 +100,7 @@ any DDP functionality*
 5. Start the TESTPMD::
-    ./x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -c f -n 4 -w
+    ./x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -c f -n 4 -a
     -- -i --port-topology=chained --txq=64 --rxq=64 --pkt-filter-mode=perfect
diff --git a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst
index 6d0eb88..218b960 100644
--- a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst
+++ b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst
@@ -51,14 +51,14 @@ Test Case 1: default hugepage size w/ and w/o numa
 2. Bind one nic port to igb_uio driver, launch testpmd::
-    ./testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i
+    ./dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i
     testpmd>start
 3. Send packet with packet generator, check testpmd could forward packets correctly.
 4. Goto step 2 resart testpmd with numa support::
-    ./testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i --numa
+    ./dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i --numa
     testpmd>start
 5. Send packets with packet generator, make sure testpmd could receive and fwd packets correctly.
@@ -73,10 +73,10 @@ Test Case 2: mount size exactly match total hugepage size with two mount points
 2. Bind two nic ports to igb_uio driver, launch testpmd with numactl::
-    numactl --membind=1 ./testpmd -l 31-32 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge1 --file-prefix=abc -w 82:00.0 -- -i --socket-num=1 --no-numa
+    numactl --membind=1 ./dpdk-testpmd -l 31-32 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge1 --file-prefix=abc -a 82:00.0 -- -i --socket-num=1 --no-numa
     testpmd>start
-    numactl --membind=1 ./testpmd -l 33-34 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge2 --file-prefix=bcd -w 82:00.1 -- -i --socket-num=1 --no-numa
+    numactl --membind=1 ./dpdk-testpmd -l 33-34 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge2 --file-prefix=bcd -a 82:00.1 -- -i --socket-num=1 --no-numa
     testpmd>start
 3. Send packets with packet generator, make sure two testpmd could receive and fwd packets correctly.
@@ -90,7 +90,7 @@ Test Case 3: mount size greater than total hugepage size with single mount point
 2. Bind one nic port to igb_uio driver, launch testpmd::
-    ./testpmd -c 0x3 -n 4 --legacy-mem --huge-dir /mnt/huge --file-prefix=abc -- -i
+    ./dpdk-testpmd -c 0x3 -n 4 --legacy-mem --huge-dir /mnt/huge --file-prefix=abc -- -i
     testpmd>start
 3. Send packets with packet generator, make sure testpmd could receive and fwd packets correctly.
@@ -106,13 +106,13 @@ Test Case 4: mount size greater than total hugepage size with multiple mount poi
 2. Bind one nic port to igb_uio driver, launch testpmd::
-    numactl --membind=0 ./testpmd -c 0x3 -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge1 --file-prefix=abc -- -i --socket-num=0 --no-numa
+    numactl --membind=0 ./dpdk-testpmd -c 0x3 -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge1 --file-prefix=abc -- -i --socket-num=0 --no-numa
     testpmd>start
-    numactl --membind=0 ./testpmd -c 0xc -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge2 --file-prefix=bcd -- -i --socket-num=0 --no-numa
+    numactl --membind=0 ./dpdk-testpmd -c 0xc -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge2 --file-prefix=bcd -- -i --socket-num=0 --no-numa
     testpmd>start
-    numactl --membind=0 ./testpmd -c 0x30 -n 4 --legacy-mem --socket-mem 1024,0 --huge-dir /mnt/huge3 --file-prefix=fgh -- -i --socket-num=0 --no-numa
+    numactl --membind=0 ./dpdk-testpmd -c 0x30 -n 4 --legacy-mem --socket-mem 1024,0 --huge-dir /mnt/huge3 --file-prefix=fgh -- -i --socket-num=0 --no-numa
     testpmd>start
 3. Send packets with packet generator, check first and second testpmd will start correctly while third one will report error with not enough mem in socket 0.
@@ -124,6 +124,6 @@ Test Case 5: run dpdk app in limited hugepages controlled by cgroup
     cgcreate -g hugetlb:/test-subgroup
     cgset -r hugetlb.1GB.limit_in_bytes=2147483648 test-subgroup
-    cgexec -g hugetlb:test-subgroup numactl -m 1 ./testpmd -c 0x3000 -n 4 -- -i --socket-num=1 --no-numa
+    cgexec -g hugetlb:test-subgroup numactl -m 1 ./dpdk-testpmd -c 0x3000 -n 4 -- -i --socket-num=1 --no-numa
 2. Start testpmd and send packets with packet generator, make sure testpmd could receive and fwd packets correctly.
diff --git a/test_plans/enable_package_download_in_ice_driver_test_plan.rst b/test_plans/enable_package_download_in_ice_driver_test_plan.rst
index 4139191..578ba30 100644
--- a/test_plans/enable_package_download_in_ice_driver_test_plan.rst
+++ b/test_plans/enable_package_download_in_ice_driver_test_plan.rst
@@ -104,7 +104,7 @@ Test case 2: Driver enters Safe Mode successfully
 2. Start testpmd::
     ./testpmd -c 0x3fe -n 6 \
-    -w PORT0_PCI,safe-mode-support=1 -w PORT1_PCI,safe-mode-support=1 \
+    -a PORT0_PCI,safe-mode-support=1 -a PORT1_PCI,safe-mode-support=1 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --port-topology=chained
 There will be an error reported::
@@ -176,7 +176,7 @@ Compile DPDK and testpmd::
 Launch testpmd with 1 default interface and 1 specific interface::
-    ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0 -w b1:00.0 --log-level=8 -- -i
+    ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0 -a b1:00.0 --log-level=8 -- -i
 In this case, b1:00.0 interface is specific interface.
diff --git a/test_plans/eventdev_perf_test_plan.rst b/test_plans/eventdev_perf_test_plan.rst
index d5fe4ed..f8e8153 100644
--- a/test_plans/eventdev_perf_test_plan.rst
+++ b/test_plans/eventdev_perf_test_plan.rst
@@ -49,14 +49,14 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
 Parameters::
     -l CORELIST : List of cores to run on
         The argument format is [-c2][,c3[-c4],...]
        where c1, c2, etc are core indexes between 0 and 24
-    -w --pci-allowlist : Add a PCI device in allow list.
+    -a --pci-allowlist : Add a PCI device in allow list.
        Only use the specified PCI devices.
        The argument format is <[domain:]bus:devid.func>.
        This option can be present several times (once per device).
@@ -76,7 +76,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -88,7 +88,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -100,7 +100,7 @@ Description: Execute performance test with Atomic_queue type of stage in multi-f
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -112,7 +112,7 @@ Description: Execute performance test with Parallel_queue type of stage in multi
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -124,7 +124,7 @@ Description: Execute performance test with Ordered_queue type of stage in multi-
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -136,7 +136,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -148,7 +148,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -160,7 +160,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -172,7 +172,7 @@ Description: Execute performance test with Atomic_queue type of stage in multi-f
 1. Run the sample with below command::
-    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
+    # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
@@ -184,7 +184,7 @@ Description: Execute performance test with Parallel_queue type of stage in multi
 1.
Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -196,7 +196,7 @@ Description: Execute performance test with Ordered_queue type of stage in multi- 1. Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -209,7 +209,7 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo 1. Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -221,7 +221,7 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f 1.
Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=P --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -233,7 +233,7 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl 1. Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=O --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -245,7 +245,7 @@ Description: Execute performance test with Atomic_queue type of stage in multi-f 1. Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=A --wlcores=23 2. 
Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -257,7 +257,7 @@ Description: Execute performance test with Parallel_queue type of stage in multi 1. Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=P --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) @@ -269,7 +269,7 @@ Description: Execute performance test with Ordered_queue type of stage in multi- 1. Run the sample with below command:: - # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23 + # ./build/dpdk-test-eventdev -l 22-23 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_queue --stlist=O --wlcores=23 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst b/test_plans/eventdev_pipeline_perf_test_plan.rst index abeab18..34464ab 100644 --- a/test_plans/eventdev_pipeline_perf_test_plan.rst +++ b/test_plans/eventdev_pipeline_perf_test_plan.rst @@ -51,12 +51,12 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo 1. 
Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -75,12 +75,12 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f 1. Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -p --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 -p --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -100,12 +100,12 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl 1. Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -o --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device_bus_id -- -w 0xc00000 -n=0 -o --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. 
This option can be present several times (once per device). @@ -125,12 +125,12 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo 1. Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -149,12 +149,12 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f 1. Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -p --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 -p --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -174,12 +174,12 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl 1. 
Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -o --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -- -w 0xc00000 -n=0 -o --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -199,12 +199,12 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo 1. Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -223,12 +223,12 @@ Description: Execute performance test with Parallel_atq type of stage in multi-f 1. 
Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -p --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 -p --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -248,12 +248,12 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl 1. Run the sample with below command:: - # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -o --dump + # ./build/dpdk-eventdev_pipeline -c 0xe00000 -a eventdev_device_bus_id -a device0_bus_id -a device1_bus_id -a device2_bus_id -a device3_bus_id -- -w 0xc00000 -n=0 -o --dump Parameters:: -c, COREMASK : Hexadecimal bitmask of cores to run on - -w, --pci-allowlist : Add a PCI device in allow list. + -a, --pci-allowlist : Add a PCI device in allow list. Only use the specified PCI devices. The argument format is <[domain:]bus:devid.func>. This option can be present several times (once per device). @@ -265,4 +265,4 @@ Description: Execute performance test with Ordered_atq type of stage in multi-fl 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple) -3. Observe the speed of packets received(Rx-rate) on Ixia. \ No newline at end of file +3. Observe the speed of packets received(Rx-rate) on Ixia. 
diff --git a/test_plans/flexible_rxd_test_plan.rst b/test_plans/flexible_rxd_test_plan.rst index b2ca34a..30ae699 100644 --- a/test_plans/flexible_rxd_test_plan.rst +++ b/test_plans/flexible_rxd_test_plan.rst @@ -94,13 +94,13 @@ Test Case 01: Check single VLAN fields in RXD (802.1Q) Launch testpmd by:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 testpmd>set verbose 1 testpmd>set fwd io testpmd>start -Please change the core setting (-l option) and port's PCI (-w option) \ +Please change the core setting (-l option) and port's PCI (-a option) \ by your DUT environment Send a packet with VLAN tag from test network interface:: @@ -130,7 +130,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -148,7 +148,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -167,7 +167,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=vlan -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=vlan -- -i --rxq=32 
--txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -186,7 +186,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=ipv4 -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=ipv4 -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -208,7 +208,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=ipv6 -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=ipv6 -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -230,7 +230,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=ipv6_flow -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=ipv6_flow -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -250,7 +250,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=tcp -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=tcp -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -269,7 +269,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr=tcp -- -i --rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr=tcp -- -i 
--rxq=32 --txq=32 --portmask=0x1 --nb-cores=2 Test packet:: @@ -288,7 +288,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp Launch testpmd command:: - ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:00.0,proto_xtr='[(2):ipv4,(3):ipv6,(4):tcp]' -- -i --rxq=64 --txq=64 --portmask=0x1 + ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -a 18:00.0,proto_xtr='[(2):ipv4,(3):ipv6,(4):tcp]' -- -i --rxq=64 --txq=64 --portmask=0x1 Create generic flow on NIC:: @@ -360,7 +360,7 @@ Test steps are same to ``Test Case 01``, just change the launch command of testp MPLS cases use same parameter Launch testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=ip_offset -- -i --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=ip_offset -- -i --portmask=0x1 --nb-cores=2 check RXDID value correct:: diff --git a/test_plans/floating_veb_test_plan.rst b/test_plans/floating_veb_test_plan.rst index 17b8231..1522b53 100644 --- a/test_plans/floating_veb_test_plan.rst +++ b/test_plans/floating_veb_test_plan.rst @@ -124,14 +124,14 @@ MAC switch when PF is link down as well as up. 1. Launch PF testpmd:: ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 - -w 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i + -a 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i testpmd> port start all testpmd> show port info all 2. VF1, run testpmd:: ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 - -w 05:02.0 --file-prefix=test2 -- -i --crc-strip + -a 05:02.0 --file-prefix=test2 -- -i --crc-strip testpmd> mac_addr add 0 vf1_mac_address testpmd> set fwd rxonly testpmd> set promisc all off @@ -140,7 +140,7 @@ MAC switch when PF is link down as well as up. 
VF2, run testpmd:: - ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 05:02.1 --file-prefix=test3 + ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 05:02.1 --file-prefix=test3 -- -i --crc-strip --eth-peer=0,vf1_mac_address testpmd> set fwd txonly testpmd> start @@ -162,7 +162,7 @@ send traffic from VF0 to PF, PF can't receive any packets either. 1. In PF, launch testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i testpmd> set fwd rxonly testpmd> set promisc all off testpmd> port start all @@ -171,7 +171,7 @@ send traffic from VF0 to PF, PF can't receive any packets either. 2. VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.0 --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr testpmd> set fwd txonly testpmd> start testpmd> show port stats all @@ -193,7 +193,7 @@ in floating mode, check VF1 can't receive traffic from tester. 2. PF, launch testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i --eth-peer=0,VF_mac_address + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i --eth-peer=0,VF_mac_address testpmd> set fwd mac testpmd> port start all testpmd> start @@ -201,7 +201,7 @@ in floating mode, check VF1 can't receive traffic from tester. VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test2 -- -i + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.0 --file-prefix=test2 -- -i testpmd> set fwd rxonly testpmd> start testpmd> show port stats all @@ -237,7 +237,7 @@ Details: 1. 
Launch PF testpmd, run testpmd with floating parameters and make the link down:: ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 \ - \"-w "05:00.0,enable_floating_veb=1,floating_veb_list=0;2-3\" \ + \"-a "05:00.0,enable_floating_veb=1,floating_veb_list=0;2-3\" \ --file-prefix=test1 -- -i //VF0, VF2 and VF3in floating VEB, VF1 in legacy VEB @@ -251,7 +251,7 @@ Details: VF0, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.0 \ + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.0 \ --file-prefix=test2 -- -i --eth-peer=0,vf1_mac_address testpmd> set fwd rxonly testpmd> mac_addr add 0 vf0_mac_address //set the vf0_mac_address @@ -260,7 +260,7 @@ Details: VF1, run testpmd:: - ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 05:02.1 \ + ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 05:02.1 \ --file-prefix=test3 -- -i --eth-peer=0,vf1_mac_address testpmd> set fwd txonly testpmd> mac_addr add 0 vf1_mac_addres @@ -275,7 +275,7 @@ Details: VF2, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.2 \ + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.2 \ --file-prefix=test2 -- -i testpmd> set fwd rxonly testpmd> mac_addr add 0 vf2_mac_addres @@ -284,7 +284,7 @@ Details: VF0, run testpmd:: - ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 05:02.0 \ + ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 05:02.0 \ --file-prefix=test3 -- -i --eth-peer=0,vf2_mac_address testpmd> set fwd txonly testpmd> start @@ -319,7 +319,7 @@ Details: 1. In PF, launch testpmd:: ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 \ - \"-w 05:00.0,enable_floating_veb=1,floating_veb_list=0;3\" \ + \"-a 05:00.0,enable_floating_veb=1,floating_veb_list=0;3\" \ --file-prefix=test1 -- -i testpmd> set fwd rxonly testpmd> port start all @@ -328,7 +328,7 @@ Details: 2. 
VF0, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.0 \ + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.0 \ --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr testpmd> set fwd txonly testpmd> start @@ -337,7 +337,7 @@ Details: 3. VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.1 \ + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.1 \ --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr testpmd> set fwd txonly testpmd> start @@ -346,7 +346,7 @@ Details: 4. VF0, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test2 -- -i + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.0 --file-prefix=test2 -- -i testpmd> mac_addr add 0 VF0_mac_address testpmd> set promisc all off testpmd> set fwd rxonly @@ -361,7 +361,7 @@ Details: 5. VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.1 --file-prefix=test2 -- -i + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.1 --file-prefix=test2 -- -i testpmd> mac_addr add 0 VF1_mac_address testpmd> set promisc all off testpmd> set fwd rxonly @@ -376,7 +376,7 @@ Details: 6. 
VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.1 --file-prefix=test2 -- -i + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 05:02.1 --file-prefix=test2 -- -i testpmd> mac_addr add 0 VF1_mac_address testpmd> set promisc all off testpmd> set fwd rxonly @@ -384,7 +384,7 @@ Details: VF2, run testpmd:: - ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 05:02.2 \ + ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 05:02.2 \ --file-prefix=test3 -- -i --eth-peer=0,VF1_mac_address testpmd> set fwd txonly testpmd> start diff --git a/test_plans/fortville_rss_input_test_plan.rst b/test_plans/fortville_rss_input_test_plan.rst index a73b1b5..0202f74 100644 --- a/test_plans/fortville_rss_input_test_plan.rst +++ b/test_plans/fortville_rss_input_test_plan.rst @@ -54,7 +54,7 @@ Prerequisites 2.Start testpmd on host:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 81:00.0 -- -i --txq=8 --rxq=8 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -a 81:00.0 -- -i --txq=8 --rxq=8 testpmd>set verbose 1 testpmd>start diff --git a/test_plans/generic_flow_api_test_plan.rst b/test_plans/generic_flow_api_test_plan.rst index 760f8e8..30a5510 100644 --- a/test_plans/generic_flow_api_test_plan.rst +++ b/test_plans/generic_flow_api_test_plan.rst @@ -99,7 +99,7 @@ Test case: Fortville fdir for L2 payload 1. Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start @@ -133,7 +133,7 @@ Test case: Fortville fdir for flexbytes 1. 
Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start @@ -217,17 +217,17 @@ Test case: Fortville fdir for ipv4 1. Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -a 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -a 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start @@ -322,17 +322,17 @@ Test case: Fortville fdir for ipv6 1. 
Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -a 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -a 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start @@ -401,7 +401,7 @@ Test case: Fortville fdir wrong parameters 1. Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start @@ -461,7 +461,7 @@ Test case: Fortville tunnel vxlan 1. 
Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --tx-offloads=0x8fff --disable-rss + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --tx-offloads=0x8fff --disable-rss testpmd> rx_vxlan_port add 4789 0 testpmd> set fwd rxonly testpmd> set verbose 1 @@ -469,7 +469,7 @@ Test case: Fortville tunnel vxlan testpmd> start the pf's mac address is 00:00:00:00:01:00 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff --disable-rss + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -a 05:02.0 --file-prefix=vf --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff --disable-rss testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> set promisc all off @@ -564,19 +564,19 @@ Test case: Fortville tunnel nvgre 1. 
Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --tx-offloads=0x8fff + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --tx-offloads=0x8fff testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> set promisc all off testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -a 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> set promisc all off testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -a 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> set promisc all off @@ -816,17 +816,17 @@ Test case: IXGBE L2-tunnel(supported by x552 and x550) 1. 
Launch the app ``testpmd`` with the following arguments:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -a 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -a 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -a 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start @@ -1445,7 +1445,7 @@ Test case: Fortville fdir for l2 mac ./usertools/dpdk-devbind.py -b igb_uio 0000:81:00.0 launch testpmd:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -w 0000:81:00.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -a 0000:81:00.0 -- -i --rxq=4 --txq=4 1. basic test for ipv4-other diff --git a/test_plans/iavf_flexible_descriptor_test_plan.rst b/test_plans/iavf_flexible_descriptor_test_plan.rst index ae28865..d03fbe4 100644 --- a/test_plans/iavf_flexible_descriptor_test_plan.rst +++ b/test_plans/iavf_flexible_descriptor_test_plan.rst @@ -129,7 +129,7 @@ VLAN cases 1. 
Launch testpmd by:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=vlan -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=vlan -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 testpmd>set verbose 1 testpmd>set fwd io testpmd>start @@ -139,7 +139,7 @@ VLAN cases expected: RXDID[17] .. note:: - Please change the core setting (-l option) and port's PCI (-w option) by your DUT environment + Please change the core setting (-l option) and port's PCI (-a option) by your DUT environment Test Case: Check single VLAN fields in RXD (802.1Q) --------------------------------------------------- @@ -218,7 +218,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd Launch testpmd command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=ipv4 -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=ipv4 -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 check RXDID value correct:: @@ -244,7 +244,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd Launch testpmd command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=ipv6 -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=ipv6 -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 check RXDID value correct:: @@ -270,7 +270,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd Launch testpmd command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=ipv6_flow -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=ipv6_flow -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 check RXDID value correct:: @@ 
-294,7 +294,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd Launch testpmd command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=tcp -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=tcp -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 check RXDID value correct:: @@ -317,7 +317,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd Launch testpmd command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=tcp -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=tcp -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 check RXDID value correct:: @@ -340,7 +340,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd Launch testpmd command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr='[(2):ipv4,(3):ipv6,(4):tcp]' -- -i --rxq=16 --txq=16 --portmask=0x1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr='[(2):ipv4,(3):ipv6,(4):tcp]' -- -i --rxq=16 --txq=16 --portmask=0x1 check RXDID value correct:: @@ -385,13 +385,13 @@ Test steps are same to ``VLAN cases``, use different "proto_xtr" parameters the use error parameter Launch testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=vxlan -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=vxlan -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 testpmd can't be started, check "iavf_lookup_flex_desc_type(): wrong flex_desc type, it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ovs|ip_offset" in testpmd output.
don't use parameter launch testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0 -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=4 --txq=4 --portmask=0x1 --nb-cores=2 testpmd started, check "iavf_configure_queues(): request RXDID[16] in Queue[0]" in testpmd output @@ -403,7 +403,7 @@ Test steps are same to ``VLAN cases``, just change the launch command of testpmd MPLS cases use same parameter Launch testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -w af:01.0,proto_xtr=ip_offset -- -i --portmask=0x1 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0,proto_xtr=ip_offset -- -i --portmask=0x1 --nb-cores=2 check RXDID value correct:: diff --git a/test_plans/iavf_package_driver_error_handle_test_plan.rst b/test_plans/iavf_package_driver_error_handle_test_plan.rst index 23978ea..fe95e29 100644 --- a/test_plans/iavf_package_driver_error_handle_test_plan.rst +++ b/test_plans/iavf_package_driver_error_handle_test_plan.rst @@ -78,7 +78,7 @@ Test Case 1: Check old driver and latest comms pkg compatibility ./usertools/dpdk-devbind.py -b vfio-pci 0000:b1:01.0 4. Launch the testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -w b1:01.0 --file-prefix=vf -- -i --rxq=16 --txq=16 --nb-cores=2 + ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a b1:01.0 --file-prefix=vf -- -i --rxq=16 --txq=16 --nb-cores=2 5.
Create a rss rule testpmd> flow create 0 ingress pattern eth / ipv4 / end actions rss types l3-dst-only end key_len 0 queues end / end diff --git a/test_plans/iavf_test_plan.rst b/test_plans/iavf_test_plan.rst index 4855e17..f79214d 100644 --- a/test_plans/iavf_test_plan.rst +++ b/test_plans/iavf_test_plan.rst @@ -425,7 +425,7 @@ create 2 VFs from 1 PF, and start PF:: echo 2 > /sys/bus/pci/devices/0000\:08\:00.0/max_vfs; ./usertools/dpdk-devbind.py --bind=vfio-pci 09:02.0 09:0a.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1,2 -n 4 --socket-mem=1024,1024 --file-prefix=pf -w 08:00.0 -- -i + ./x86_64-native-linuxapp-gcc/app/testpmd -l 1,2 -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 08:00.0 -- -i testpmd>set vf mac addr 0 0 00:12:34:56:78:01 testpmd>set vf mac addr 0 1 00:12:34:56:78:02 @@ -433,7 +433,7 @@ create 2 VFs from 1 PF, and start PF:: start testpmd with 2VFs individually:: ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-5 -n 4 --master-lcore=3 --socket-mem=1024,1024 --file-prefix=vf1 \ - -w 09:02.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=2 --rss-ip --eth-peer=0,00:12:34:56:78:02 + -a 09:02.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=2 --rss-ip --eth-peer=0,00:12:34:56:78:02 testpmd>set promisc all off testpmd>set fwd mac @@ -442,7 +442,7 @@ start testpmd with 2VFs individually:: :: ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-8 -n 4 --master-lcore=6 --socket-mem=1024,1024 --file-prefix=vf2 \ - -w 09:0a.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=2 --rss-ip + -a 09:0a.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=2 --rss-ip testpmd>set promisc all off testpmd>set fwd mac @@ -461,7 +461,7 @@ Test Case: vector vf performance 2. 
start testpmd for PF:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 --socket-mem=1024,1024 --file-prefix=pf \ - -w 08:00.0 -w 08:00.1 -- -i + -a 08:00.0 -a 08:00.1 -- -i testpmd>set vf mac addr 0 0 00:12:34:56:78:01 testpmd>set vf mac addr 1 0 00:12:34:56:78:02 @@ -469,7 +469,7 @@ Test Case: vector vf performance 3. start testpmd for VF:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f8 -n 4 --master-lcore=3 --socket-mem=1024,1024 --file-prefix=vf \ - -w 09:0a.0 -w 09:02.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=4 --rss-ip + -a 09:0a.0 -a 09:02.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=4 --rss-ip testpmd>set promisc all off testpmd>set fwd mac diff --git a/test_plans/inline_ipsec_test_plan.rst b/test_plans/inline_ipsec_test_plan.rst index 3a26e18..11dfeda 100644 --- a/test_plans/inline_ipsec_test_plan.rst +++ b/test_plans/inline_ipsec_test_plan.rst @@ -144,7 +144,7 @@ Test Case: IPSec Encryption =========================== Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg @@ -194,7 +194,7 @@ Test Case: IPSec Encryption with Jumboframe =========================================== Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg @@ -214,7 +214,7 @@ Check burst esp packets can't be received from unprotected port. 
Set jumbo frames size as 9000, start it with port 1 assigned to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 -j 9000 --config="(0,0,20),(1,0,21)" -f ./enc.cfg @@ -239,7 +239,7 @@ Create configuration file with multiple SP/SA/RT rules for different ip address. Start ipsec-secgw with two queues enabled on each port and port 1 assigned to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(0,1,20),(1,0,21),(1,1,21)" -f ./enc_rss.cfg @@ -259,7 +259,7 @@ Test Case: IPSec Decryption =========================== Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg @@ -275,7 +275,7 @@ Test Case: IPSec Decryption with wrong key ========================================== Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg @@ -295,7 +295,7 @@ IPsec application will produce error "IPSEC_ESP: failed crypto op". 
Test Case: IPSec Decryption with Jumboframe =========================================== Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg @@ -312,7 +312,7 @@ Check burst(8192) packets which have been decapsulated can't be received from pr Set jumbo frames size as 9000, start it with port 1 assigned to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 -j 9000 --config="(0,0,20),(1,0,21)" -f ./dec.cfg @@ -334,8 +334,8 @@ Create configuration file with multiple SA rule for different ip address. Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev - "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev + "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 -config="(0,0,20),(0,1,20),(1,0,21),(1,1,21)" -f ./dec_rss.cfg Send two burst(32) esp packets with different ip to unprotected port. 
@@ -351,7 +351,7 @@ Test Case: IPSec Encryption/Decryption simultaneously ===================================================== Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:: - sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 + sudo ./build/ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./enc_dec.cfg diff --git a/test_plans/ip_pipeline_test_plan.rst b/test_plans/ip_pipeline_test_plan.rst index 0b6bb5b..1c774e3 100644 --- a/test_plans/ip_pipeline_test_plan.rst +++ b/test_plans/ip_pipeline_test_plan.rst @@ -178,7 +178,7 @@ Test Case: traffic management pipeline 3. Run ip_pipeline app as the following:: - ./build/ip_pipeline -c 0x3 -n 4 -w 0000:81:00.0 -- -s examples/traffic_manager.cli + ./build/ip_pipeline -c 0x3 -n 4 -a 0000:81:00.0 -- -s examples/traffic_manager.cli 4. Config traffic with dst ipaddr increase from 0.0.0.0 to 15.255.0.0, total 4096 streams, also config flow tracked-by dst ipaddr, verify each flow's throughput is about linerate/4096. @@ -220,7 +220,7 @@ Test Case: vf l2fwd pipeline(pf bound to dpdk driver) 2. Start testpmd with the four pf ports:: - ./testpmd -c 0xf0 -n 4 -w 05:00.0 -w 05:00.1 -w 05:00.2 -w 05:00.3 --file-prefix=pf --socket-mem 1024,1024 -- -i + ./testpmd -c 0xf0 -n 4 -a 05:00.0 -a 05:00.1 -a 05:00.2 -a 05:00.3 --file-prefix=pf --socket-mem 1024,1024 -- -i Set vf mac address from pf port:: @@ -235,8 +235,8 @@ Test Case: vf l2fwd pipeline(pf bound to dpdk driver) 4. 
Run ip_pipeline app as the following:: - ./build/ip_pipeline -c 0x3 -n 4 -w 0000:05:02.0 -w 0000:05:06.0 \ - -w 0000:05:0a.0 -w 0000:05:0e.0 --file-prefix=vf --socket-mem 1024,1024 -- -s examples/vf.cli + ./build/ip_pipeline -c 0x3 -n 4 -a 0000:05:02.0 -a 0000:05:06.0 \ + -a 0000:05:0a.0 -a 0000:05:0e.0 --file-prefix=vf --socket-mem 1024,1024 -- -s examples/vf.cli The exact format of port allowlist: domain:bus:devid:func @@ -331,7 +331,7 @@ Test Case: crypto pipeline - AEAD algorithm in aesni_gcm 4. Run ip_pipeline app as the following:: - ./examples/ip_pipeline/build/ip_pipeline -w 0000:81:00.0 --vdev crypto_aesni_gcm0 + ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_gcm0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli 5. Send packets with IXIA port, @@ -365,7 +365,7 @@ Test Case: crypto pipeline - cipher algorithm in aesni_mb 4. Run ip_pipeline app as the following:: - ./examples/ip_pipeline/build/ip_pipeline -w 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli + ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli 5. Send packets with IXIA port, Use a tool to calculate the ciphertext from plaintext and key as an expected value. @@ -395,7 +395,7 @@ Test Case: crypto pipeline - cipher_auth algorithm in aesni_mb 4. Run ip_pipeline app as the following:: - ./examples/ip_pipeline/build/ip_pipeline -w 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli + ./examples/ip_pipeline/build/ip_pipeline -a 0000:81:00.0 --vdev crypto_aesni_mb0 --socket-mem 0,2048 -l 23,24,25 -- -s ./examples/ip_pipeline/examples/flow_crypto.cli 5. Send packets with IXIA port, Use a tool to calculate the ciphertext from plaintext and cipher key with AES-CBC algorithm.
diff --git a/test_plans/ipsec_gw_and_library_test_plan.rst b/test_plans/ipsec_gw_and_library_test_plan.rst index fac2a7b..74bf407 100644 --- a/test_plans/ipsec_gw_and_library_test_plan.rst +++ b/test_plans/ipsec_gw_and_library_test_plan.rst @@ -202,7 +202,7 @@ Cryptodev AES-NI algorithm validation matrix is showed in table below. AESNI_MB device start cmd:: - ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 + ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -a 0000:60:00.0 --vdev=net_tap0,mac=fixed --vdev crypto_aesni_mb_pmd_1 --vdev=crypto_aesni_mb_pmd_2 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg @@ -230,8 +230,8 @@ Cryptodev QAT algorithm validation matrix is showed in table below. QAT device start cmd:: - ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem --vdev=net_tap0,mac=fixed -w 0000:60:00.0 - -w 0000:1a:01.0 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3 + ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem --vdev=net_tap0,mac=fixed -a 0000:60:00.0 + -a 0000:1a:01.0 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg AES_GCM_PMD algorithm validation matrix is showed in table below. @@ -244,7 +244,7 @@ AES_GCM_PMD algorithm validation matrix is showed in table below. 
AESNI_GCM device start cmd:: - ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 --vdev=net_tap0,mac=fixed + ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -a 0000:60:00.0 --vdev=net_tap0,mac=fixed --vdev crypto_aesni_gcm_pmd_1 --vdev=crypto_aesni_gcm_pmd_2 -l 9,10,11 -n 6 -- -P --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg diff --git a/test_plans/l2tp_esp_coverage_test_plan.rst b/test_plans/l2tp_esp_coverage_test_plan.rst index 4998f0f..a768684 100644 --- a/test_plans/l2tp_esp_coverage_test_plan.rst +++ b/test_plans/l2tp_esp_coverage_test_plan.rst @@ -88,7 +88,7 @@ Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload 1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -w af:01.0 -- -i --enable-rx-cksum + ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum 2. DUT setup csum forwarding mode:: @@ -163,7 +163,7 @@ Test Case 2: test MAC_IPV4_ESP HW checksum offload 1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd, setup csum forwarding mode:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -w af:01.0 -- -i --enable-rx-cksum + ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum 2. DUT setup csum forwarding mode:: @@ -1095,7 +1095,7 @@ Test Case 14: MAC_IPV4_L2TPv3 vlan strip on + HW checksum offload check The pre-steps are as l2tp_esp_iavf_test_plan. -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -w af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark:: @@ -1189,7 +1189,7 @@ The pre-steps are as l2tp_esp_iavf_test_plan. 
Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check ======================================================================== -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -w af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark:: @@ -1279,7 +1279,7 @@ Test Case 16: MAC_IPV4_ESP vlan strip on + HW checksum offload check The pre-steps are as l2tp_esp_iavf_test_plan. -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -w af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. DUT create fdir rules for MAC_IPV4_ESP with queue index and mark:: @@ -1372,7 +1372,7 @@ The pre-steps are as l2tp_esp_iavf_test_plan. Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check =========================================================================== -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -w af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. 
DUT create fdir rules for MAC_IPV6_NAT-T-ESP with queue index and mark:: diff --git a/test_plans/linux_modules_test_plan.rst b/test_plans/linux_modules_test_plan.rst index 3f286ab..57b0327 100644 --- a/test_plans/linux_modules_test_plan.rst +++ b/test_plans/linux_modules_test_plan.rst @@ -80,7 +80,7 @@ Bind the interface to the driver :: Start testpmd in a loop configuration :: - # x86_64-native-linux-gcc/app/testpmd -l 1,2 -n 4 -w xxxx:xx:xx.x \ + # x86_64-native-linux-gcc/app/testpmd -l 1,2 -n 4 -a xxxx:xx:xx.x \ -- -i --port-topology=loop Start packet forwarding :: @@ -122,7 +122,7 @@ Grant permissions for all users to access the new character device :: Start testpmd in a loop configuration :: - $ x86_64-native-linux-gcc/app/testpmd -l 1,2 -n 4 -w xxxx:xx:xx.x --in-memory \ + $ x86_64-native-linux-gcc/app/testpmd -l 1,2 -n 4 -a xxxx:xx:xx.x --in-memory \ -- -i --port-topology=loop Start packet forwarding :: diff --git a/test_plans/macsec_for_ixgbe_test_plan.rst b/test_plans/macsec_for_ixgbe_test_plan.rst index 997f921..660c2fd 100644 --- a/test_plans/macsec_for_ixgbe_test_plan.rst +++ b/test_plans/macsec_for_ixgbe_test_plan.rst @@ -113,7 +113,7 @@ Test Case 1: MACsec packets send and receive 1. Start the testpmd of rx port:: - ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -w 0000:07:00.1 \ + ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \ -- -i --port-topology=chained 2. Set MACsec offload on:: @@ -150,7 +150,7 @@ Test Case 1: MACsec packets send and receive 1. Start the testpmd of tx port:: - ./testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -w 0000:07:00.0 \ + ./testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -a 0000:07:00.0 \ -- -i --port-topology=chained 2. Set MACsec offload on:: @@ -422,7 +422,7 @@ Test Case 7: performance test of MACsec offload packets with cable, connect 05:00.0 to IXIA. Bind the three ports to dpdk driver. 
Start two testpmd:: - ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -w 0000:07:00.1 \ + ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \ -- -i --port-topology=chained testpmd> set macsec offload 0 on encrypt on replay-protect on diff --git a/test_plans/malicious_driver_event_indication_test_plan.rst b/test_plans/malicious_driver_event_indication_test_plan.rst index dfeb783..1c9d244 100644 --- a/test_plans/malicious_driver_event_indication_test_plan.rst +++ b/test_plans/malicious_driver_event_indication_test_plan.rst @@ -62,10 +62,10 @@ Test Case1: Check log output when malicious driver events is detected echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/max_vfs 2. Launch PF by testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -w [pci of PF] -- -i + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i 3. Launch VF by testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -w [pci of VF] -- -i + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i > set fwd txonly > start @@ -83,14 +83,14 @@ Test Case2: Check the event counter number for malicious driver events echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/max_vfs 2. Launch PF by testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -w [pci of PF] -- -i + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i 3. launch VF by testpmd and start txonly mode 3 times: repeat following step 3 times - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -w [pci of VF] -- -i + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i > set fwd txonly > start > quit 4. 
Check the PF can detect the malicious driver events number directly in the log: - i40e_handle_mdd_event(): TX driver issue detected on VF 0 3times \ No newline at end of file + i40e_handle_mdd_event(): TX driver issue detected on VF 0 3times diff --git a/test_plans/pmd_test_plan.rst b/test_plans/pmd_test_plan.rst index 4017a4a..58be7d5 100644 --- a/test_plans/pmd_test_plan.rst +++ b/test_plans/pmd_test_plan.rst @@ -105,7 +105,7 @@ Test Case: Packet Checking in scalar mode The linuxapp is started with the following parameters: :: - -c 0x6 -n 4 -w ,scalar_enable=1 -- -i --portmask= + -c 0x6 -n 4 -a ,scalar_enable=1 -- -i --portmask= This test is applicable for Marvell devices. The tester sends 1 packet at a diff --git a/test_plans/port_representor_test_plan.rst b/test_plans/port_representor_test_plan.rst index c54b7ec..5f6ff1c 100644 --- a/test_plans/port_representor_test_plan.rst +++ b/test_plans/port_representor_test_plan.rst @@ -59,13 +59,13 @@ Create two VFs and two VFs representor ports which are used as control plane. 4. start a testpmd with create 2 VFs representor ports as control plane named testpmd-pf:: - ./testpmd --lcores 1,2 -n 4 -w af:00.0,representor=0-1 --socket-mem 1024,1024 \ + ./testpmd --lcores 1,2 -n 4 -a af:00.0,representor=0-1 --socket-mem 1024,1024 \ --proc-type auto --file-prefix testpmd-pf -- -i --port-topology=chained 5. start two testpmd as dataplane named testpmd-vf0/testpmd-vf1(case 3 run later):: - ./testpmd --lcores 3,4 -n 4 -w af:02.0 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd-vf0 -- -i - ./testpmd --lcores 5,6 -n 4 -w af:02.1 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd-vf1 -- -i + ./testpmd --lcores 3,4 -n 4 -a af:02.0 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd-vf0 -- -i + ./testpmd --lcores 5,6 -n 4 -a af:02.1 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd-vf1 -- -i Note: Every case needs to restart testpmd. 
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst index bd4e284..7b0a8d1 100644 --- a/test_plans/qinq_filter_test_plan.rst +++ b/test_plans/qinq_filter_test_plan.rst @@ -134,7 +134,7 @@ Test Case 3: qinq packet filter to VF queues #. set up testpmd with fortville PF NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -w 81:00.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 #. enable qinq:: @@ -160,7 +160,7 @@ Test Case 3: qinq packet filter to VF queues #. set up testpmd with fortville VF0 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -w 81:02.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: @@ -176,7 +176,7 @@ Test Case 3: qinq packet filter to VF queues #. set up testpmd with fortville VF1 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -w 81:02.1 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: @@ -211,7 +211,7 @@ Test Case 4: qinq packet filter with different tpid #. set up testpmd with fortville PF NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -w 81:00.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 #. enable qinq:: @@ -241,7 +241,7 @@ Test Case 4: qinq packet filter with different tpid #. 
set up testpmd with fortville VF0 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -w 81:02.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: @@ -257,7 +257,7 @@ Test Case 4: qinq packet filter with different tpid #. set up testpmd with fortville VF1 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -w 81:02.1 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: diff --git a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst index 1e287e4..333b993 100644 --- a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst +++ b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst @@ -108,7 +108,7 @@ Test case 1: VF consume max queue number on one PF port ================================================================ 1. Start the PF testpmd:: - ./testpmd -c f -n 4 -w 05:00.0 --file-prefix=test1 \ + ./testpmd -c f -n 4 -a 05:00.0 --file-prefix=test1 \ --socket-mem 1024,1024 -- -i 2. Start the two testpmd to consume maximum queues:: @@ -120,10 +120,10 @@ Test case 1: VF consume max queue number on one PF port The driver will alloc queues as power of 2, and queue must be equal or less than 16, so the second VF testpmd can only start '--rxq=8 --txq=8':: - ./testpmd -c 0xf0 -n 4 -w 05:02.0 -w 05:02.1 -w 05:02.2 -w... --file-prefix=test2 \ + ./testpmd -c 0xf0 -n 4 -a 05:02.0 -a 05:02.1 -a 05:02.2 -a... 
--file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=16 --txq=16 - ./testpmd -c 0xf00 -n 4 -w 05:05.7 --file-prefix=test3 \ + ./testpmd -c 0xf00 -n 4 -a 05:05.7 --file-prefix=test3 \ --socket-mem 1024,1024 -- -i --rxq=8 --txq=8 Check the Max possible RX queues and TX queues of the two VFs are both 16:: @@ -154,7 +154,7 @@ Test case 2: set max queue number per vf on one pf port As the feature description describes, the max value of queue-num-per-vf is 8 for both two and four ports Fortville NIC:: - ./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=16 --file-prefix=test1 \ + ./testpmd -c f -n 4 -a 05:00.0,queue-num-per-vf=16 --file-prefix=test1 \ --socket-mem 1024,1024 -- -i PF port failed to start with "i40e_pf_parameter_init(): diff --git a/test_plans/runtime_vf_queue_number_test_plan.rst b/test_plans/runtime_vf_queue_number_test_plan.rst index cf07619..9d4f953 100644 --- a/test_plans/runtime_vf_queue_number_test_plan.rst +++ b/test_plans/runtime_vf_queue_number_test_plan.rst @@ -41,7 +41,7 @@ the VF queue number at runtime. Since DPDK 19.02, VF is able to request up to 16 queues and the PF EAL parameter 'queue-num-per-vf' is redefined as the number of reserved queue per VF. For example, if the PCI address of an i40e PF is aaaa:bb.cc, -with the EAL parameter -w aaaa:bb.cc,queue-num-per-vf=8, the number of +with the EAL parameter -a aaaa:bb.cc,queue-num-per-vf=8, the number of reserved queue per VF created from this PF is 8. The valid values of queue-num-per-vf includes 1,2,4,8,16, if the value of queue-num-per-vf is invalid, it is set as 4 forcibly, if there is no queue-num-per-vf @@ -130,14 +130,14 @@ Test case 1: reserve valid vf queue number 1.
Start PF testpmd with random queue-num-per-vf in [1, 2, 4, 8, 16], for example, we use 4 as the reserved vf queue numbers:: - ./testpmd -c f -n 4 -w 18:00.0,queue-num-per-vf=4 \ + ./testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=4 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i Note that testpmd starts normally without any warning or error. 2. Start VF testpmd:: - ./testpmd -c 0xf0 -n 4 -w 03:00.0 \ + ./testpmd -c 0xf0 -n 4 -a 03:00.0 \ --file-prefix=test2 --socket-mem 1024,1024 -- -i 3. VF requests a queue number that is equal to the reserved queue number, and we cannot find a VF reset while configuring it:: @@ -195,7 +195,7 @@ Test case 2: reserve invalid VF queue number 1. Start PF testpmd with random queue-num-per-vf in [0, 3, 5-7, 9-15, 17], for example, we use 0 as the reserved vf queue numbers:: - ./testpmd -c f -n 4 -w 18:00.0,queue-num-per-vf=0 \ + ./testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=0 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i 2. Verify testpmd started with logs as below:: @@ -207,12 +207,12 @@ Test case 3: set valid VF queue number in testpmd command-line options 1. Start PF testpmd:: - ./testpmd -c f -n 4 -w 18:00.0 \ + ./testpmd -c f -n 4 -a 18:00.0 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i 2. Start VF testpmd with "--rxq=[rxq] --txq=[txq]", and random valid values from 1 to 16, take 3 for example:: - ./testpmd -c 0xf0 -n 4 -w 18:02.0 --file-prefix=test2 \ + ./testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=3 --txq=3 3. Configure vf forwarding prerequisites and start forwarding:: @@ -254,12 +254,12 @@ Test case 4: set invalid VF queue number in testpmd command-line options 1. Start PF testpmd:: - ./testpmd -c f -n 4 -w 18:00.0 \ + ./testpmd -c f -n 4 -a 18:00.0 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i 2.
Start VF testpmd with "--rxq=0 --txq=0" :: - ./testpmd -c 0xf0 -n 4 -w 18:02.0 --file-prefix=test2 \ + ./testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=0 --txq=0 Verify testpmd exited with error as below:: @@ -268,7 +268,7 @@ Test case 4: set invalid VF queue number in testpmd command-line options 3. Start VF testpmd with "--rxq=17 --txq=17" :: - ./testpmd -c 0xf0 -n 4 -w 18:02.0 --file-prefix=test2 \ + ./testpmd -c 0xf0 -n 4 -a 18:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=17 --txq=17 Verify testpmd exited with error as below:: @@ -280,12 +280,12 @@ Test case 5: set valid VF queue number with testpmd function command 1. Start PF testpmd:: - ./testpmd -c f -n 4 -w 18:00.0 \ + ./testpmd -c f -n 4 -a 18:00.0 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i 2. Start VF testpmd without setting "rxq" and "txq":: - ./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \ + ./testpmd -c 0xf0 -n 4 -a 05:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i 3. Configure vf forwarding prerequisites and start forwarding:: @@ -307,12 +307,12 @@ Test case 6: set invalid VF queue number with testpmd function command 1. Start PF testpmd:: - ./testpmd -c f -n 4 -w 18:00.0 \ + ./testpmd -c f -n 4 -a 18:00.0 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i 2. Start VF testpmd without setting "rxq" and "txq":: - ./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \ + ./testpmd -c 0xf0 -n 4 -a 05:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i @@ -344,7 +344,7 @@ Test case 7: Reserve VF queue number when VF binds to kernel driver 2. Reserve VF queue number :: - ./testpmd -c f -n 4 -w 18:00.0,queue-num-per-vf=2 \ + ./testpmd -c f -n 4 -a 18:00.0,queue-num-per-vf=2 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i 3.
Check that the VF0 rxq and txq numbers are 2:: diff --git a/test_plans/unit_tests_dump_test_plan.rst b/test_plans/unit_tests_dump_test_plan.rst index 8978fdf..c8832ff 100644 --- a/test_plans/unit_tests_dump_test_plan.rst +++ b/test_plans/unit_tests_dump_test_plan.rst @@ -175,7 +175,7 @@ stdout. The steps to run the unit test manually are as follows:: # make -C ./app/test/ - # ./app/test/test -n 1 -c ffff -w|-b pci_address + # ./app/test/test -n 1 -c ffff -a|-b pci_address RTE>> dump_devargs The final output of the test will be the pci address of the allow list diff --git a/test_plans/unit_tests_event_timer_test_plan.rst b/test_plans/unit_tests_event_timer_test_plan.rst index 58ac78c..192d983 100644 --- a/test_plans/unit_tests_event_timer_test_plan.rst +++ b/test_plans/unit_tests_event_timer_test_plan.rst @@ -12,7 +12,7 @@ test can be launched independently using the command line interface. The steps to run the unit test manually are as follows:: # make -C ./app/test/ - # ./app/test/test -n 1 -c ffff -w , + # ./app/test/test -n 1 -c ffff -a , RTE>> event_timer_adapter_test The final output of the test has to be "Test OK" diff --git a/test_plans/veb_switch_test_plan.rst b/test_plans/veb_switch_test_plan.rst index e849732..ca8bd7e 100644 --- a/test_plans/veb_switch_test_plan.rst +++ b/test_plans/veb_switch_test_plan.rst @@ -112,7 +112,7 @@ Details: 1.
In VF1, run testpmd:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 1024,1024 - -w 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12 + -a 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12 testpmd>set fwd txonly testpmd>set promisc all off testpmd>start @@ -120,7 +120,7 @@ Details: In VF2, run testpmd:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 --socket-mem 1024,1024 - -w 05:02.1 --file-prefix=test2 -- -i --crc-strip + -a 05:02.1 --file-prefix=test2 -- -i --crc-strip testpmd>set fwd rxonly testpmd>set promisc all off testpmd>start @@ -140,7 +140,7 @@ Details: 1. In VF1, run testpmd:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 1024,1024 - -w 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12 + -a 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12 testpmd>set fwd mac testpmd>set promisc all off testpmd>start @@ -148,7 +148,7 @@ Details: In VF2, run testpmd:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 --socket-mem 1024,1024 - -w 05:02.1 --file-prefix=test2 -- -i --crc-strip + -a 05:02.1 --file-prefix=test2 -- -i --crc-strip testpmd>set fwd rxonly testpmd>set promisc all off testpmd>start @@ -174,7 +174,7 @@ Details: 2. In VF1, run testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12 testpmd>set fwd mac testpmd>set promisc all off @@ -182,7 +182,7 @@ Details: In VF2, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.1 + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.1 --file-prefix=test2 -- -i --crc-strip testpmd>set fwd rxonly testpmd>set promisc all off @@ -216,14 +216,14 @@ Details: 1. 
vf->pf PF, launch testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:00.0 --file-prefix=test1 -- -i + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i testpmd>set fwd rxonly testpmd>set promisc all off testpmd>start VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,pf_mac_addr testpmd>set fwd txonly testpmd>set promisc all off testpmd>start @@ -234,14 +234,14 @@ Details: 2. pf->vf PF, launch testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:00.0 --file-prefix=test1 -- -i --eth-peer=0,vf1_mac_addr + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i --eth-peer=0,vf1_mac_addr testpmd>set fwd txonly testpmd>set promisc all off testpmd>start VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 --file-prefix=test2 -- -i + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i testpmd>mac_addr add 0 vf1_mac_addr testpmd>set fwd rxonly testpmd>set promisc all off @@ -253,14 +253,14 @@ Details: 3. tester->vf PF, launch testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:00.0 --file-prefix=test1 -- -i + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i testpmd>set fwd mac testpmd>set promisc all off testpmd>start VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 --file-prefix=test2 -- -i + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i testpmd>mac_addr add 0 vf1_mac_addr testpmd>set fwd rxonly testpmd>set promisc all off @@ -273,19 +273,19 @@ Details: 4. 
vf1->vf2 PF, launch testpmd:: - ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:00.0 --file-prefix=test1 -- -i + ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i testpmd>set promisc all off VF1, run testpmd:: - ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,vf2_mac_addr + ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -a 0000:05:02.0 --file-prefix=test2 -- -i --eth-peer=0,vf2_mac_addr testpmd>set fwd txonly testpmd>set promisc all off testpmd>start VF2, run testpmd:: - ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 0000:05:02.1 --file-prefix=test3 -- -i + ./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -a 0000:05:02.1 --file-prefix=test3 -- -i testpmd>mac_addr add 0 vf2_mac_addr testpmd>set fwd rxonly testpmd>set promisc all off diff --git a/test_plans/vf_l3fwd_test_plan.rst b/test_plans/vf_l3fwd_test_plan.rst index efdedda..9fb97cc 100644 --- a/test_plans/vf_l3fwd_test_plan.rst +++ b/test_plans/vf_l3fwd_test_plan.rst @@ -156,7 +156,7 @@ take XL710 for example:: 4, Start dpdk l3fwd with 1:1 matched cores and queues:: - ./examples/l3fwd/build/l3fwd -c 0x3c -n 4 -w 0000:18:02.0 -w 0000:18:06.0 -- -p 0x3 --config '(0,0,2),(1,0,3),(0,1,4),(1,1,5)' + ./examples/l3fwd/build/l3fwd -c 0x3c -n 4 -a 0000:18:02.0 -a 0000:18:06.0 -- -p 0x3 --config '(0,0,2),(1,0,3),(0,1,4),(1,1,5)' 5, Send packet with frame size from 64bytes to 1518bytes with ixia traffic generator, make sure your traffic configuration meets LPM rules, and will go to all queues, all ports. 
diff --git a/test_plans/vf_macfilter_test_plan.rst b/test_plans/vf_macfilter_test_plan.rst index c2fd298..ae2250a 100644 --- a/test_plans/vf_macfilter_test_plan.rst +++ b/test_plans/vf_macfilter_test_plan.rst @@ -97,7 +97,7 @@ Test Case 1: test_kernel_2pf_2vf_1vm_iplink_macfilter disable promisc mode,set it in mac forward mode:: ./usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 00:07.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -a 00:06.0 -a 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff testpmd> port stop all testpmd> port config all crc-strip on @@ -175,7 +175,7 @@ Test Case 2: test_kernel_2pf_2vf_1vm_mac_add_filter VF, disable promisc mode, add a new MAC to VF0 and then start:: ./usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 00:07.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -a 00:06.0 -a 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff testpmd> port stop all testpmd> port config all crc-strip on @@ -269,7 +269,7 @@ Test Case 3: test_dpdk_2pf_2vf_1vm_mac_add_filter VF, disable promisc mode, add a new MAC to VF0 and then start:: ./usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 00:07.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -a 00:06.0 -a 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff testpmd> port stop all testpmd> port config all crc-strip on @@ -365,7 +365,7 @@ Test Case 4: test_dpdk_2pf_2vf_1vm_iplink_macfilter disable promisc mode, set it in mac forward mode:: ./usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 00:07.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff + 
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -a 00:06.0 -a 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff testpmd> port stop all testpmd> port config all crc-strip on diff --git a/test_plans/vf_packet_rxtx_test_plan.rst b/test_plans/vf_packet_rxtx_test_plan.rst index 773758b..6c34f0b 100644 --- a/test_plans/vf_packet_rxtx_test_plan.rst +++ b/test_plans/vf_packet_rxtx_test_plan.rst @@ -96,7 +96,7 @@ Test Case 1: VF_packet_IO_kernel_PF_dpdk_VF and then start testpmd, set it in mac forward mode:: ./usertools/dpdk-devbind.py -s --bind=igb_uio 00:06.0 00:07.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 \ + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -a 00:06.0 -a 00:07.0 \ -- -i --portmask=0x3 --tx-offloads=0x8fff testpmd> set fwd mac @@ -165,7 +165,7 @@ Test Case 2: VF_packet_IO_dpdk_PF_dpdk_VF and then start testpmd, set it in mac forward mode:: ./usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 00:07.0 - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 \ + ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -a 00:06.0 -a 00:07.0 \ -- -i testpmd> set fwd mac diff --git a/test_plans/vf_pf_reset_test_plan.rst b/test_plans/vf_pf_reset_test_plan.rst index d433d71..009e99a 100644 --- a/test_plans/vf_pf_reset_test_plan.rst +++ b/test_plans/vf_pf_reset_test_plan.rst @@ -160,11 +160,11 @@ Test Case 2: vf reset -- create two vfs on one pf, run testpmd separately 2. Start testpmd on two vf ports:: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 \ - --socket-mem 1024,1024 -w 81:02.0 --file-prefix=test1 \ + --socket-mem 1024,1024 -a 81:02.0 --file-prefix=test1 \ -- -i --eth-peer=0,00:11:22:33:44:12 \ ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 \ - --socket-mem 1024,1024 -w 81:02.1 --file-prefix=test2 \ + --socket-mem 1024,1024 -a 81:02.1 --file-prefix=test2 \ -- -i 3. 
Set fwd mode on vf0:: @@ -545,7 +545,7 @@ test Case 9: vf reset (two vfs passed through to one VM) ./usertools/dpdk-devbind.py -b igb_uio 00:05.0 00:05.1 ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 \ - -w 00:05.0 -w 00:05.1 -- -i --portmask=0x3 + -a 00:05.0 -a 00:05.1 -- -i --portmask=0x3 5. Add MAC address to the vf0 ports, set it in mac forward mode:: diff --git a/test_plans/vf_vlan_test_plan.rst b/test_plans/vf_vlan_test_plan.rst index 5eaa994..47b4249 100644 --- a/test_plans/vf_vlan_test_plan.rst +++ b/test_plans/vf_vlan_test_plan.rst @@ -87,7 +87,7 @@ Prerequisites 5. Start testpmd, set it in rxonly mode and enable verbose output:: - testpmd -c 0x0f -n 4 -w 00:04.0 -w 00:05.0 -- -i --portmask=0x3 --tx-offloads=0x8fff + testpmd -c 0x0f -n 4 -a 00:04.0 -a 00:05.0 -- -i --portmask=0x3 --tx-offloads=0x8fff testpmd> set fwd rxonly testpmd> set verbose 1 testpmd> start diff --git a/test_plans/vhost_virtio_user_interrupt_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_test_plan.rst index 42e645f..2ac6a38 100644 --- a/test_plans/vhost_virtio_user_interrupt_test_plan.rst +++ b/test_plans/vhost_virtio_user_interrupt_test_plan.rst @@ -206,7 +206,7 @@ flow: Vhost <--> Virtio 1. Bind one cbdma port to igb_uio driver, then start vhost-user side:: - ./testpmd -c 0x3000 -n 4 -w 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i + ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i testpmd>set fwd mac testpmd>start @@ -254,7 +254,7 @@ flow: Vhost <--> Virtio 1. 
Bind one cbdma port to igb_uio driver, then start vhost-user side:: - ./testpmd -c 0x3000 -n 4 -w 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i + ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i testpmd>set fwd mac testpmd>start diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst index df76b54..fb45c56 100644 --- a/test_plans/virtio_pvp_regression_test_plan.rst +++ b/test_plans/virtio_pvp_regression_test_plan.rst @@ -150,7 +150,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path 3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x7 -n 3 -w 0000:xx.00,vectorized -- -i \ + ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -267,7 +267,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path 3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x7 -n 3 -w 0000:xx.00,vectorized -- -i \ + ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst index 8914e7a..7280af9 100644 --- a/test_plans/vm2vm_virtio_pmd_test_plan.rst +++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst @@ -48,7 +48,7 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path 1.
Bind one physical nic port to igb_uio, then launch the testpmd by below commands:: rm -rf vhost-net* - ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -78,13 +78,13 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path 3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 testpmd>set fwd rxonly testpmd>start 4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 and send 64B packets, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 testpmd>set fwd txonly testpmd>set txpkts 64 testpmd>start tx_first 32 @@ -103,7 +103,7 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path 1. 
Bind one physical nic port to igb_uio, then launch the testpmd by below commands:: rm -rf vhost-net* - ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -133,13 +133,13 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path 3. On VM1, bind vdev with igb_uio driver, then run testpmd, set rxonly for virtio1 :: - ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 testpmd>set fwd rxonly testpmd>start 4. On VM2, bind vdev with igb_uio driver, then run testpmd, set txonly for virtio2 and send 64B packets :: - ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 testpmd>set fwd txonly testpmd>set txpkts 64 testpmd>start tx_first 32 @@ -158,7 +158,7 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands:: rm -rf vhost-net* - ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -188,13 +188,13 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path 3.
On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 testpmd>set fwd rxonly testpmd>start 4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024 testpmd>set fwd txonly testpmd>set txpkts 64 testpmd>start tx_first 32 @@ -213,7 +213,7 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands:: rm -rf vhost-net* - ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -243,13 +243,13 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path 3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 :: - ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 testpmd>set fwd rxonly testpmd>start 4. 
On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 :: - ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024 testpmd>set fwd txonly testpmd>set txpkts 64 testpmd>start tx_first 32 @@ -267,7 +267,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check 1. Bind virtio with igb_uio driver, launch the testpmd by below commands:: - ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -310,7 +310,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check 4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1:: - ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 + ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 testpmd>set fwd rxonly testpmd>start @@ -320,7 +320,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check 6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode:: - ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 + ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 testpmd>set fwd mac testpmd>set txpkts 2000,2000,2000,2000 @@ -333,7 +333,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check 9. 
Relaunch testpmd in VM1:: - ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 testpmd>set fwd rxonly testpmd>start @@ -343,7 +343,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check 11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2:: - ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>set burst 1 testpmd>start tx_first 10 @@ -355,7 +355,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch 1. Bind virtio with igb_uio driver, launch the testpmd by below commands:: - ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -398,7 +398,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch 4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1:: - ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 + ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 testpmd>set fwd rxonly testpmd>start @@ -408,7 +408,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch 6. 
 On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::

-    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set fwd mac
     testpmd>set txpkts 2000,2000,2000,2000
@@ -421,7 +421,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 9. Relaunch testpmd in VM1::

-    ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
@@ -431,7 +431,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::

-    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set fwd mac
     testpmd>set burst 1
     testpmd>start tx_first 10
@@ -443,7 +443,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 1. Bind virtio with igb_uio driver, launch the testpmd by below commands::

-    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -486,7 +486,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::

-    ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set fwd rxonly
     testpmd>start
@@ -496,7 +496,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::

-    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set fwd mac
     testpmd>set txpkts 2000,2000,2000,2000
@@ -509,7 +509,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 9. Relaunch testpmd in VM1::

-    ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
@@ -519,7 +519,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::

-    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set fwd mac
     testpmd>set burst 1
     testpmd>start tx_first 10
@@ -532,7 +532,7 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::

     rm -rf vhost-net*
-    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -562,13 +562,13 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
 3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

 4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>set txpkts 64
     testpmd>start tx_first 32
@@ -630,7 +630,7 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi
 5. Launch testpmd in VM2, sent imix pkts from VM2::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
     testpmd>start tx_first 1
@@ -698,13 +698,13 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM
 4. Launch testpmd in VM1::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set mac fwd
     testpmd>start

 5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
     testpmd>start tx_first 32
@@ -770,13 +770,13 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable
 4. Launch testpmd in VM1::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set mac fwd
     testpmd>start

 5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
     testpmd>start tx_first 32
@@ -802,7 +802,7 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable
     modprobe vfio-pci
     echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
     ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
     testpmd>start tx_first 32
-- 
1.8.3.1
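[Editorial note, not part of the patch] The substitutions in this series are mechanical: the testpmd binary is renamed to `dpdk-testpmd` and, per the cover letter, the deprecated EAL `-w` (PCI allow-list) flag becomes `-a`. As a sketch of the rewrite (the `sed` expressions below are illustrative, not taken from the patch), any test-plan command line can be updated the same way:

```shell
# Illustrative only: the mechanical rewrite this patch series applies.
# First expression renames the binary, second swaps the standalone -w flag for -a.
cmd="./testpmd -c 0x3 -n 4 -w 0000:00:05.0 -- -i --txd=1024 --rxd=1024"
new=$(printf '%s' "$cmd" | sed -e 's|\./testpmd|./dpdk-testpmd|' -e 's| -w | -a |')
echo "$new"
```

This produces the modernized invocation `./dpdk-testpmd ... -a 0000:00:05.0 ...`; the hunks in this file only exercise the binary rename, while other files in the series exercise the flag change.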