From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/2] test_plans/vswitch_sample_cbdma_test_plan: modify testplan by DPDK change
Date: Thu, 10 Mar 2022 15:53:17 +0800
Message-ID: <20220310075317.493805-1-weix.ling@intel.com>

Since DPDK commit 917229c24e, the DPDK example code no longer needs to be modified before running this test plan:
1. Delete the steps that modify the DPDK code (the MAX_QUEUES change in examples/vhost/main.c).
2. Add the --total-num-mbufs 600000 parameter when starting dpdk-vhost.
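
For reference, the updated launch command from Test Case 1 looks like the sketch below. The core list, PCI address, and socket path are the test plan's example values; the note that --total-num-mbufs sizes the sample application's mbuf pool for CBDMA async copies is my assumption, not stated in this patch.

	# dpdk-vhost launch from Test Case 1 with the new option appended.
	# Assumption: --total-num-mbufs sets the total mbuf pool size so the
	# CBDMA async enqueue path has enough mbufs available; 600000 is the
	# value used throughout this test plan.
	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- \
		-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 \
		--socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] \
		--client --total-num-mbufs 600000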

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/vswitch_sample_cbdma_test_plan.rst | 25 +++++--------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst
index 659609e5..af2e62d1 100644
--- a/test_plans/vswitch_sample_cbdma_test_plan.rst
+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst
@@ -45,19 +45,6 @@ from 21.05,packed ring also can support cbdma copy with vhost enqueue direction.
 Prerequisites
 =============
 
-Modify the testpmd code as following::
-
-	--- a/examples/vhost/main.c
-	+++ b/examples/vhost/main.c
-	@@ -29,7 +29,7 @@
-	 #include "main.h"
-
-	 #ifndef MAX_QUEUES
-	-#define MAX_QUEUES 128
-	+#define MAX_QUEUES 512
-	 #endif
-
-	 /* the maximum number of external ports supported */
 
 Test Case1: PVP performance check with CBDMA channel using vhost async driver
 =============================================================================
@@ -67,7 +54,7 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver
 2. On host, launch dpdk-vhost by below command::
 
 	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- \
-	-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client
+	-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client --total-num-mbufs 600000
 
 3. Launch virtio-user with packed ring::
 
@@ -103,7 +90,7 @@ Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
 2. On host, launch dpdk-vhost by below command::
 
 	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- \
-	-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client
+	-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client --total-num-mbufs 600000
 
 3. launch two virtio-user ports::
 
@@ -141,7 +128,7 @@ Test Case3: VM2VM forwarding test with two CBDMA channels
 2. On host, launch dpdk-vhost by below command::
 
 	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
-	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1]  --client
+	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1]  --client --total-num-mbufs 600000
 
 3. Launch virtio-user::
 
@@ -188,7 +175,7 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check
 2. On host, launch dpdk-vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
-    --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client
+    --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000
 
 3. Start VM0 with qemu-5.2.0::
 
@@ -268,7 +255,7 @@ Test Case5: VM2VM split ring test with iperf and reconnect stable check
 2. On host, launch dpdk-vhost by below command::
 
 	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
-	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client
+	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000
 
 3. Start VM0 with qemu-5.2.0::
 
@@ -327,7 +314,7 @@ Test Case6: VM2VM packed ring test with iperf and reconnect stable test
 2. On host, launch dpdk-vhost by below command::
 
 	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
-	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1]
+	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --total-num-mbufs 600000
 
 3. Start VM0 with qemu-5.2.0::
 
-- 
2.25.1

