From: Wenjie Li <wenjiex.a.li@intel.com>
To: dts@dpdk.org
Cc: Wenjie Li <wenjiex.a.li@intel.com>
Subject: [dts] [PATCH V1] test_plans:fix build warning
Date: Thu, 26 Sep 2019 11:27:59 +0800
Message-ID: <1569468479-115817-1-git-send-email-wenjiex.a.li@intel.com>
Fix Sphinx build warnings in the test plans: remove trailing whitespace and
inconsistent indentation in literal blocks, add the missing newline at the end
of two files, and add mdd_test_plan to the index.
Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
test_plans/index.rst | 3 +-
.../vhost_user_live_migration_test_plan.rst | 60 +++++++++----------
test_plans/vm2vm_virtio_pmd_test_plan.rst | 18 +++---
3 files changed, 41 insertions(+), 40 deletions(-)
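The warnings fixed here come from whitespace problems that are easy to miss by eye: trailing spaces inside literal blocks and a missing newline at end of file. As a minimal sketch (a hypothetical helper, not part of DTS or this patch), such issues can be caught before running the Sphinx build:

```python
def rst_issues(text):
    """Return (line_no, problem) tuples for two common rst lint issues:
    trailing whitespace on a line, and a missing newline at end of file."""
    issues = []
    lines = text.split("\n")
    for i, line in enumerate(lines, start=1):
        # rstrip() differs from the line exactly when trailing blanks exist
        if line != line.rstrip():
            issues.append((i, "trailing whitespace"))
    if text and not text.endswith("\n"):
        # corresponds to git's "\ No newline at end of file" marker
        issues.append((len(lines), "no newline at end of file"))
    return issues
```

Running this over `test_plans/*.rst` before committing would flag exactly the lines this patch touches.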
diff --git a/test_plans/index.rst b/test_plans/index.rst
index e15823e..a10d171 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -158,6 +158,7 @@ The following are the test plans for the DPDK DTS automated test system.
vf_l3fwd_test_plan
softnic_test_plan
vm_hotplug_test_plan
+ mdd_test_plan
virtio_1.0_test_plan
vhost_enqueue_interrupt_test_plan
@@ -234,4 +235,4 @@ The following are the test plans for the DPDK DTS automated test system.
performance_thread_test_plan
fips_cryptodev_test_plan
- flow_filtering_test_plan
\ No newline at end of file
+ flow_filtering_test_plan
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index 3814196..ec32e82 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -94,7 +94,7 @@ On host server side:
On the backup server, run the vhost testpmd on the host and launch VM:
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
backup server # mkdir /mnt/huge
backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -148,20 +148,20 @@ On the backup server, run the vhost testpmd on the host and launch VM:
10. Start Live migration, ensure the traffic is continuous::
- host server # telnet localhost 3333
- host server # (qemu)migrate -d tcp:backup server:4444
- host server # (qemu)info migrate
- host server # Check if the migrate is active and not failed.
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:backup server:4444
+ host server # (qemu)info migrate
+ host server # Check if the migrate is active and not failed.
11. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
- host server # (qemu)info migrate
- host server # (qemu)Migration status: completed
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
12. After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets::
- backup server # ssh -p 5555 127.0.0.1
- backup VM # screen -r vm
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
Test Case 2: migrate with virtio-pmd zero-copy enabled
======================================================
@@ -193,7 +193,7 @@ On host server side:
On the backup server, run the vhost testpmd on the host and launch VM:
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
backup server # mkdir /mnt/huge
backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -247,21 +247,21 @@ On the backup server, run the vhost testpmd on the host and launch VM:
10. Start Live migration, ensure the traffic is continuous::
- host server # telnet localhost 3333
- host server # (qemu)migrate -d tcp:backup server:4444
- host server # (qemu)info migrate
- host server # Check if the migrate is active and not failed.
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:backup server:4444
+ host server # (qemu)info migrate
+ host server # Check if the migrate is active and not failed.
11. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
- host server # (qemu)info migrate
- host server # (qemu)Migration status: completed
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
12. After live migration, go to the backup server start vhost testpmd and check if the virtio-pmd can continue to receive packets::
- backup server # testpmd>start
- backup server # ssh -p 5555 127.0.0.1
- backup VM # screen -r vm
+ backup server # testpmd>start
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
Test Case 3: migrate with virtio-net
====================================
@@ -294,7 +294,7 @@ On host server side:
On the backup server, run the vhost testpmd on the host and launch VM:
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
backup server # mkdir /mnt/huge
backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -343,13 +343,13 @@ On the backup server, run the vhost testpmd on the host and launch VM:
10. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
- host server # (qemu)info migrate
- host server # (qemu)Migration status: completed
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
11. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
- backup server # ssh -p 5555 127.0.0.1
- backup VM # screen -r vm
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
Test Case 4: adjust virtio-net queue numbers while migrating with virtio-net
============================================================================
@@ -382,7 +382,7 @@ On host server side:
On the backup server, run the vhost testpmd on the host and launch VM:
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
backup server # mkdir /mnt/huge
backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -431,14 +431,14 @@ On the backup server, run the vhost testpmd on the host and launch VM:
10. Change virtio-net queue numbers from 1 to 4 while migrating::
- host server # ethtool -L ens3 combined 4
+ host server # ethtool -L ens3 combined 4
11. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
- host server # (qemu)info migrate
- host server # (qemu)Migration status: completed
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
12. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
- backup server # ssh -p 5555 127.0.0.1
- backup VM # screen -r vm
\ No newline at end of file
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index ead7d58..06c76b8 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -326,10 +326,10 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
- testpmd>set fwd mac
- testpmd>set burst 1
- testpmd>start tx_first 10
+ ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>set burst 1
+ testpmd>start tx_first 10
12. Check payload is correct in each dumped packets.
@@ -408,10 +408,10 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
- testpmd>set fwd mac
- testpmd>set burst 1
- testpmd>start tx_first 10
+ ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ testpmd>set fwd mac
+ testpmd>set burst 1
+ testpmd>start tx_first 10
12. Check payload is correct in each dumped packets.
@@ -450,4 +450,4 @@ Test Case 7: vm2vm vhost-user/virtio1.1-pmd mergeable path test with payload che
testpmd>set burst 1
testpmd>start tx_first 10
-5. Check payload is correct in each dumped packets.
\ No newline at end of file
+5. Check payload is correct in each dumped packets.
--
2.17.2