From: Yinan <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Wang Yinan <yinan.wang@intel.com>
Subject: [dts] [PATCH v1] test_plans: remove virtio 1.1 inorder path as it does not exist
Date: Mon, 15 Jul 2019 01:12:57 +0000
Message-ID: <20190715011257.11152-1-yinan.wang@intel.com>
From: Wang Yinan <yinan.wang@intel.com>
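
The virtio 1.1 (packed ring) in-order rx/tx path does not exist, so drop
its test case from the three multi-paths test plans and benchmark 7 paths
instead of 8. For reference only, a minimal sketch of the virtio-user
launch for the packed ring paths that remain, following the commands
already used in these test plans (core list, memory size and MAC address
are the test plans' defaults, not requirements):

    # virtio 1.1 mergeable path; use mrg_rxbuf=0 for the virtio 1.1 normal path
    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1 \
      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
    # removed combination with no matching path: packed_vq=1,mrg_rxbuf=0,in_order=1
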
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../pvp_multi_paths_performance_test_plan.rst | 34 ++----------
...host_single_core_performance_test_plan.rst | 53 +++++--------------
...rtio_single_core_performance_test_plan.rst | 33 ++----------
3 files changed, 21 insertions(+), 99 deletions(-)
diff --git a/test_plans/pvp_multi_paths_performance_test_plan.rst b/test_plans/pvp_multi_paths_performance_test_plan.rst
index 6eac68b..477c68d 100644
--- a/test_plans/pvp_multi_paths_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_performance_test_plan.rst
@@ -34,12 +34,9 @@
vhost/virtio pvp multi-paths performance test plan
==================================================
-Description
-===========
-
-Benchmark pvp multi-paths performance with 8 tx/rx paths.
+Benchmark pvp multi-paths performance with 7 tx/rx paths.
Includes mergeable, normal, vector_rx, inorder mergeable,
-inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 inorder, virtio 1.1 normal path.
+inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 normal path.
Give 1 core for vhost and virtio respectively.
Test flow
@@ -220,29 +217,4 @@ Test Case 7: pvp test with vector_rx path
3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
- testpmd>show port stats all
-
-Test Case 8: pvp test with virtio 1.1 inorder path
-==================================================
-
-1. Bind one port to igb_uio, then launch vhost by below command::
-
- rm -rf vhost-net*
- ./testpmd -n 4 -l 2-3 --socket-mem 1024,1024 --legacy-mem \
- --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \
- -- -i --nb-cores=1 --txd=1024 --rxd=1024
- testpmd>set fwd mac
- testpmd>start
-
-2. Launch virtio-user by below command::
-
- ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
- --legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
- >set fwd mac
- >start
-
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
-
- testpmd>show port stats all
+ testpmd>show port stats all
\ No newline at end of file
diff --git a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
index 26d6cf1..e4d369f 100644
--- a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
@@ -30,23 +30,19 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
OF THE POSSIBILITY OF SUCH DAMAGE.
-
========================================================
vhost/virtio pvp multi-paths vhost single core test plan
========================================================
-Description
-===========
-
-Benchmark PVP vhost single core performance with 8 tx/rx paths.
-Includes mergeable, normal, vector_rx, inorder mergeable,
-inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 inorder, virtio 1.1 normal path.
+Benchmark PVP vhost single core performance with 7 tx/rx paths.
+Includes mergeable, normal, vector_rx, inorder mergeable,
+inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 normal path.
Give 2 cores for virtio and 1 core for vhost, set io fwd at virtio side to lower the virtio workload.
Test flow
=========
-TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
Test Case 1: vhost single core performance test with virtio 1.1 mergeable path
==============================================================================
@@ -54,7 +50,7 @@ Test Case 1: vhost single core performance test with virtio 1.1 mergeable path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -62,7 +58,7 @@ Test Case 1: vhost single core performance test with virtio 1.1 mergeable path
2. Launch virtio-user by below command::
./testpmd -l 7-9 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
- --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+ --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
>set fwd io
>start
@@ -75,7 +71,7 @@ Test Case 2: vhost single core performance test with virtio 1.1 normal path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -83,7 +79,7 @@ Test Case 2: vhost single core performance test with virtio 1.1 normal path
2. Launch virtio-user by below command::
./testpmd -l 7-9 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
- --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+ --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
>set fwd io
>start
@@ -96,7 +92,7 @@ Test Case 3: vhost single core performance test with inorder mergeable path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -117,7 +113,7 @@ Test Case 4: vhost single core performance test with inorder no-mergeable path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -138,7 +134,7 @@ Test Case 5: vhost single core performance test with mergeable path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -159,7 +155,7 @@ Test Case 6: vhost single core performance test with normal path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -180,7 +176,7 @@ Test Case 7: vhost single core performance test with vector_rx path
1. Bind one port to igb_uio, then launch vhost by below command::
rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
--vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -193,25 +189,4 @@ Test Case 7: vhost single core performance test with vector_rx path
>set fwd io
>start
-3. Send packet with packet generator with different packet size, check the throughput.
-
-Test Case 8: vhost single core performance test with virtio 1.1 inorder path
-============================================================================
-
-1. Bind one port to igb_uio, then launch vhost by below command::
-
- rm -rf vhost-net*
- ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost \
- --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
- testpmd>set fwd mac
- testpmd>start
-
-2. Launch virtio-user by below command::
-
- ./testpmd -l 7-9 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
- --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
- >set fwd io
- >start
-
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packet with packet generator with different packet size, check the throughput.
\ No newline at end of file
diff --git a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
index e26794d..e4394b6 100644
--- a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
@@ -34,12 +34,9 @@
vhost/virtio pvp multi-paths virtio single core test plan
=========================================================
-Description
-===========
-
-Benchmark pvp virtio single core performance with 8 tx/rx paths.
-Includes mergeable, normal, vector_rx, inorder mergeable,
-inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 inorder, virtio 1.1 normal path.
+Benchmark pvp virtio single core performance with 7 tx/rx paths.
+Includes mergeable, normal, vector_rx, inorder mergeable,
+inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 normal path.
Give 2 cores for vhost and 1 core for virtio, set io fwd at vhost side to lower the vhost workload.
Test flow
@@ -199,26 +196,4 @@ Test Case 7: virtio single core performance test with vector_rx path
>set fwd mac
>start
-3. Send packet with packet generator with different packet size, check the throughput.
-
-Test Case 8: virtio single core performance test with virtio 1.1 inorder path
-=============================================================================
-
-1. Bind one port to igb_uio, then launch vhost by below command::
-
- rm -rf vhost-net*
- ./testpmd -n 4 -l 2-4 --socket-mem 1024,1024 --legacy-mem \
- --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
- testpmd>set fwd io
- testpmd>start
-
-2. Launch virtio-user by below command::
-
- ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
- --legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
- >set fwd mac
- >start
-
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packet with packet generator with different packet size, check the throughput.
\ No newline at end of file
--
2.17.1