test suite reviews and discussions
* [dts] [PATCH] update vhost-user live migration script and test plan
@ 2016-08-05  2:14 Qian Xu
  2016-08-05  2:28 ` Liu, Yong
  0 siblings, 1 reply; 2+ messages in thread
From: Qian Xu @ 2016-08-05  2:14 UTC (permalink / raw)
  To: dts; +Cc: Qian Xu

Update the vhost-user live migration script to use testpmd as the
vhost backend application, plus some minor changes.
1. Update the backend application from the vhost-switch sample to testpmd, since
some switches filter VLAN packets. The command line and the output checks
have been updated for the testpmd application.
2. Set the qemu path for this case.
3. Remove the VLAN tag from the traffic settings.
4. Check whether the migration failed.
5. Update the qemu monitor quit check.

Update vhost-user live migration test plan
1. Add more details about the NFS settings.
2. Change the vhost-user backend application to testpmd and update the launch
steps.
3. Add more details about VM access, such as telnet, ssh, scp.

Signed-off-by: Qian Xu <qian.q.xu@intel.com>

diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index 8d4b5c9..41053e9 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -33,122 +33,174 @@
 ==============================
 DPDK vhost user live migration
 ==============================
-This feature is to make sure vhost user live migration works based on vhost-switch.
+This feature is to make sure vhost-user live migration works with testpmd as the vhost backend.
 
 Prerequisites
 -------------
-Connect three ports to one switch, these three ports are from Host, Backup
-host and tester.
+HW setup
 
-Start nfs service and export nfs to backup host IP:
+1. Connect three ports to one switch; these three ports are from the host, the backup
+host and the tester. Ensure the tester can send packets out and the host/backup server ports
+can receive these packets.
+2. It is better to have two similar machines with the same OS.
+
+NFS configuration
+1. Make sure the host nfsd module is updated to v4 (v2 does not support files > 4G).
+
+2. Start the NFS service and export the NFS folder to the backup host IP:
     host# service rpcbind start
-	host# service nfs start
-	host# cat /etc/exports
+	host# service nfs-server start
+	host# service nfs-mountd start 
+	host# systemctl stop firewalld.service
+	host# vim /etc/exports
     host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
+	
+3. Mount the host NFS folder on the backup host:
+	backup# mount -t nfs -o nolock,vers=4  host-ip:/home/vm-image /mnt/nfs
 
-Make sure host nfsd module updated to v4 version(v2 not support file > 4G)
+On host server side: 
 
-Create enough hugepages for vhost-switch and qemu backend memory.
+1. Create enough hugepages for testpmd (vhost backend) and qemu backend memory.
     host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
     host# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-Bind host port to igb_uio and start vhost switch:
-    host# vhost-switch -c f -n 4 --socket-mem 1024 -- -p 0x1
-
-Start host qemu process:
-	host# qemu-system-x86_64 -name host -enable-kvm -m 2048 \
-	-drive file=/home/vm-image/vm0.img,format=raw \
-	-serial telnet:localhost:5556,server,nowait \
-	-cpu host -smp 4 \
-	-net nic,model=e1000 \
-	-net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-	-chardev socket,id=char1,path=/root/dpdk_org/vhost-net \
-	-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-	-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-	-object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-	-numa node,memdev=mem -mem-prealloc \
-	-monitor unix:/tmp/host.sock,server,nowait \
-	-daemonize
-
-Wait for virtual machine start up and up virtIO interface:
-	host-vm# ifconfig eth1 up
-
-Check vhost-switch connected and send packet with mac+vlan can received by
-virtIO interface in VM:
-	VHOST_DATA: (0) mac 00:00:00:00:00:01 and vlan 1000 registered
-
-Mount host nfs folder on backup host: 
-	backup# mount -t nfs -o nolock,vers=4  host-ip:/home/vm-image /mnt/nfs
-
-Create enough hugepages for vhost-switch and qemu backend memory.
-    backup# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    backup# mount -t hugetlbfs hugetlbfs /mnt/huge
-
-Bind backup host port to igb_uio and start vhost switch:
-    backup# vhost-switch -c f -n 4 --socket-mem 1024 -- -p 0x1
-
-Start backup host qemu with additional parameter:
-	-incoming tcp:0:4444
-
-Test Case 1: migrate with kernel driver
-=======================================
+2. Bind host port to igb_uio and start testpmd with vhost port:
+    #./tools/dpdk-devbind.py -b igb_uio 83:00.1
+    #./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    testpmd>start
+	
+3. Start the VM on the host; here we use 5432 as the serial port, 3333 as the qemu monitor port and 5555 as the SSH port.
+    taskset -c 22-23 /home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64 -name vm1host \
+    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/qxu10/img/vm1.img \
+    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
+    -monitor telnet::3333,server,nowait \
+    -serial telnet:localhost:5432,server,nowait \
+    -daemonize
+	
+On the backup server, run the vhost testpmd and launch the VM:
+
+4. Set up hugepages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to the host:
+    backup server# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    backup server# mount -t hugetlbfs hugetlbfs /mnt/huge
+    backup server#./tools/dpdk-devbind.py -b igb_uio 81:00.1
+    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    testpmd>start
+	
+5. Launch the VM on the backup server; the script is similar to the host, but note the 2 differences:
+   1. Need to add " -incoming tcp:0:4444 " for live migration.
+   2. Need to make sure the VM image is in the NFS-mounted folder, i.e. the exact same image as on the host server.
+   
+   Backup server # 
+   /home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64 -name vm2 \
+   -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+   -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -drive file=/mnt/nfs/vm1.img \
+   -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+   -chardev socket,id=char0,path=./vhost-net \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
+   -monitor telnet::3333,server,nowait \
+   -serial telnet:localhost:5432,server,nowait \
+   -incoming tcp:0:4444 \
+   -daemonize
+
+
+Test Case 1: migrate with virtio-pmd
+====================================
 Make sure all Prerequisites has been done
-1. Login into host virtual machine and capture incoming packets.
-	host# telnet localhost 5556
-	host vm# ifconfig eth1 up
 
-2. Send continous packets with mac(00:00:00:00:00:01) and vlan(1000)
+6. SSH to the VM and scp the DPDK folder from the host to the VM:
+    host # ssh -p 5555 localhost, then input the password to log in.
+	host # scp -P 5555 -r <dpdk_folder>/ localhost:/root, then input the password to start the file transfer.
+	
+7. Telnet to the serial port and run testpmd in the VM:
+
+    host # telnet localhost 5432
+	Press Enter, then log in to the VM.
+	To leave the session, press "CTRL" + "]", then quit the telnet session.
+	On the host server VM, run the commands below to launch testpmd:
+	host vm # 
+	cd /root/dpdk
+    modprobe uio
+    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 
+    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
+	>set fwd rxonly
+	>set verbose 1 
+	>start tx_first
+
+8. Check that the host vhost pmd connects with the VM's virtio device:
+    testpmd> (check the host testpmd output for the vhost-user connection message)
+
+9. Send continuous packets with the physical port's MAC (e.g. 90:E2:BA:69:C9:C9)
 from tester port:
 	tester# scapy
-	tester# p = Ether(dst="00:00:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/
+	tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/
 	            Raw('x'*20)
 	tester# sendp(p, iface="p5p1", inter=1, loop=1)
-
-3. Check packet normally recevied by virtIO interface
-	host vm# tcpdump -i eth1 -xxx
-
-4. Connect to qemu monitor session and start migration
-	host# nc -U /tmp/host.sock
-    host# (qemu)migrate -d tcp:backup host ip:4444
-
-5. Check host vm can receive packet before migration done
-
-6. Query stats of migrate in monitor, check status of migration
+	
+	Then check that the host VM can receive the packets:
+	host VM# testpmd> port 0/queue 0: received 1 packets
+	
+10. Start live migration and ensure the traffic is continuous on the host VM side:
+    host server # telnet localhost 3333
+	(qemu) migrate -d tcp:<backup server ip>:4444
+	e.g.: migrate -d tcp:10.239.129.176:4444
+	(qemu) info migrate
+	Check that the migration is active and has not failed.
+	
+11. Check that the host VM can still receive packets before the migration is done.
+
+12. Query the migration status in the monitor; when the status is "completed", the migration is done.
     host# (qemu)info migrate
-    host# after finished:	
-
-7. After migartion done, login into backup vm and re-enable virtIO interface
-	backup vm# ifconfig eth1 down
-	backup vm# ifconfig eth1 up	
-
-8. Check backup host reconnected and packet normally recevied
-
-Test Case 2: migrate with dpdk
-==============================
-Make sure all Prerequisites has been done
-1. Send continous packets with mac(00:00:00:00:00:01) and vlan(1000)
+    host# (qemu)	
+    Migration status: completed
+
+13. After live migration, go to the backup server and check whether virtio-pmd continues to receive packets.
+    Backup server # telnet localhost 5432
+	Log in; you will see the same screen as on the host server. Check whether virtio-pmd continues to receive the packets.
+
+Test Case 2: migrate with virtio-net
+====================================
+Make sure all prerequisites have been done.
+6. Telnet to the serial port and log in to the VM:
+
+    host # telnet localhost 5432
+	Press Enter, then log in to the VM.
+	To leave the session, press "CTRL" + "]", then quit the telnet session.
+	
+7. Bring the virtio-net link up:
+	host vm # ifconfig eth1 up
+
+8. Send continuous packets with the physical port's MAC (e.g. 90:E2:BA:69:C9:C9)
+   from tester port:
 	tester# scapy
-	tester# p = Ether(dst="00:00:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/
+	tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/
 	            Raw('x'*20)
 	tester# sendp(p, iface="p5p1", inter=1, loop=1)
-
-2. bind virtIO interface to igb_uio and start testpmd
-	host vm# testpmd -c 0x7 -n 4
-
-3. Check packet normally recevied by testpmd:
-	host vm# testpmd> set fwd rxonly
-	host vm# testpmd> set verbose 1
-	host vm# testpmd> port 0/queue 0: received 1 packets
-
-4. Connect to qemu monitor session and start migration
-	host# nc -U /tmp/host.sock
-    host# (qemu)migrate -d tcp:backup host ip:4444
-
-5. Check host vm can receive packet before migration done
-
-6. Query stats of migrate in monitor, check status of migration
+	
+9. Check that the host VM can receive the packets:
+	host VM# tcpdump -i eth1	
+	
+10. Start live migration and ensure the traffic is continuous on the host VM side:
+    host server # telnet localhost 3333
+	(qemu) migrate -d tcp:<backup server ip>:4444
+	e.g.: migrate -d tcp:10.239.129.176:4444
+	(qemu) info migrate
+	Check that the migration is active and has not failed.
+	
+11. Check that the host VM can still receive packets before the migration is done.
+
+12. Query the migration status in the monitor; when the status is "completed", the migration is done.
     host# (qemu)info migrate
-    host# after finished:	
+    host# (qemu)	
+    Migration status: completed
+
+13. After live migration, go to the backup server and check whether virtio-net continues to receive packets.
+    Backup server # telnet localhost 5432
+	Log in; you will see the same screen as on the host server. Check whether virtio-net continues to receive the packets.
 
-7. After migartion done, login into backup vm and check packets recevied
-	backup vm# testpmd> port 0/queue 0: received 1 packets
\ No newline at end of file
diff --git a/tests/TestSuite_vhost_user_live_migration.py b/tests/TestSuite_vhost_user_live_migration.py
index 95e2fb3..e1b9cab 100644
--- a/tests/TestSuite_vhost_user_live_migration.py
+++ b/tests/TestSuite_vhost_user_live_migration.py
@@ -33,19 +33,13 @@ class TestVhostUserLiveMigration(TestCase):
         self.backup_tport = self.tester.get_local_port_bydut(self.backup_port, self.backup_dutip)
         self.backup_tintf = self.tester.get_interface(self.backup_tport)
 
-        # build backup vhost-switch
-        out = self.duts[0].send_expect("make -C examples/vhost", "# ")
-        self.verify("Error" not in out, "compilation error 1")
-        self.verify("No such file" not in out, "compilation error 2")
-
-        # build backup vhost-switch
-        out = self.duts[1].send_expect("make -C examples/vhost", "# ")
-        self.verify("Error" not in out, "compilation error 1")
-        self.verify("No such file" not in out, "compilation error 2")
-
-        self.vhost = "./examples/vhost/build/app/vhost-switch"
+        # Use testpmd as vhost-user application on host/backup server 
+        self.vhost = "./x86_64-native-linuxapp-gcc/app/testpmd"
         self.vm_testpmd = "./%s/app/testpmd -c 0x3 -n 4 -- -i" % self.target
-        self.virio_mac = "00:00:00:00:00:01"
+        self.virio_mac = "52:54:00:00:00:01"
+        
+        # Set the qemu_path for specific env
+        self.qemu_path = "/home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64"
 
         # flag for environment
         self.env_done = False
@@ -93,11 +87,13 @@ class TestVhostUserLiveMigration(TestCase):
             # start vhost-switch, predict hugepage on both sockets
             base_dir = crb.base_dir.replace('~', '/root')
             crb.send_expect("rm -f %s/vhost-net" % base_dir, "# ")
-            crb.send_expect("%s -c f -n 4 --socket-mem 1024 -- -p 0x1" % self.vhost, "bind to vhost-net")
+            crb.send_expect("%s -c f -n 4 --socket-mem 512,512 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i" % self.vhost, "testpmd> ",60)
+            crb.send_expect("start", "testpmd> ")
 
         try:
             # set up host virtual machine
             self.host_vm = QEMUKvm(self.duts[0], 'host', 'vhost_user_live_migration')
+            self.host_vm.set_qemu_emulator(self.qemu_path);            
             vhost_params = {}
             vhost_params['driver'] = 'vhost-user'
             # qemu command can't use ~
@@ -119,6 +115,7 @@ class TestVhostUserLiveMigration(TestCase):
             self.logger.info("Start virtual machine on backup host")
             # set up backup virtual machine
             self.backup_vm = QEMUKvm(self.duts[1], 'backup', 'vhost_user_live_migration')
+            self.backup_vm.set_qemu_emulator(self.qemu_path);
             vhost_params = {}
             vhost_params['driver'] = 'vhost-user'
             # qemu command can't use ~
@@ -156,7 +153,7 @@ class TestVhostUserLiveMigration(TestCase):
             if self.backup_serial is not None and self.backup_vm is not None:
                 self.backup_vm.close_serial_port()
 
-        self.logger.info("Stop virtual machine on host")
+
         if getattr(self, 'vm_host', None):
             if self.vm_host is not None:
                 self.host_vm.stop()
@@ -189,7 +186,7 @@ class TestVhostUserLiveMigration(TestCase):
         """
         send packet from tester
         """
-        sendp_fmt = "sendp([Ether(dst='%(DMAC)s')/Dot1Q(vlan=1000)/IP()/UDP()/Raw('x'*18)], iface='%(INTF)s', count=%(COUNT)d)"
+        sendp_fmt = "sendp([Ether(dst='%(DMAC)s')/IP()/UDP()/Raw('x'*18)], iface='%(INTF)s', count=%(COUNT)d)"
         sendp_cmd = sendp_fmt % {'DMAC': self.virio_mac, 'INTF': intf, 'COUNT': number}
         self.tester.scapy_append(sendp_cmd)
         self.tester.scapy_execute()
@@ -255,7 +252,8 @@ class TestVhostUserLiveMigration(TestCase):
 
         self.logger.info("Migrate host VM to backup host")
         # start live migration
-        self.host_vm.start_migration(self.backup_dutip, self.backup_vm.migrate_port)
+        ret = self.host_vm.start_migration(self.backup_dutip, self.backup_vm.migrate_port)
+        self.verify(ret, "Migration failed, please check VM and qemu version")
 
         # make sure still can receive packets in migration process
         self.verify_kernel(self.host_tport, self.vm_host)
@@ -266,11 +264,11 @@ class TestVhostUserLiveMigration(TestCase):
 
         # check vhost-switch log after migration
         out = self.duts[0].get_session_output(timeout=1)
-        self.verify("device has been removed" in out, "Device not removed for host")
+        self.verify("closed" in out, "Vhost Connection NOT closed on host")
         out = self.duts[1].get_session_output(timeout=1)
-        self.verify("virtio is now ready" in out, "Device not ready on backup host")
+        self.verify("established" in out, "Device not ready on backup host")
 
-        self.logger.info("Migration process done, init backup VM")
+        self.logger.info("Migration process done, then go to backup VM")
         # connected backup VM
         self.vm_backup = self.backup_vm.migrated_start()
 
@@ -291,8 +289,10 @@ class TestVhostUserLiveMigration(TestCase):
 
         self.logger.info("Migrate host VM to backup host")
         # start live migration
-        self.host_vm.start_migration(self.backup_dutip, self.backup_vm.migrate_port)
-
+        
+        ret = self.host_vm.start_migration(self.backup_dutip, self.backup_vm.migrate_port)
+        self.verify(ret, "Migration failed, please check VM and qemu version")
+       
         # make sure still can receive packets in migration process
         self.verify_dpdk(self.host_tport, self.host_serial)
 
@@ -302,11 +302,11 @@ class TestVhostUserLiveMigration(TestCase):
 
         # check vhost-switch log after migration
         out = self.duts[0].get_session_output(timeout=1)
-        self.verify("device has been removed" in out, "Device not removed for host")
+        self.verify("closed" in out, "Vhost Connection NOT closed on host")
         out = self.duts[1].get_session_output(timeout=1)
-        self.verify("virtio is now ready" in out, "Device not ready on backup host")
+        self.verify("established" in out, "Device not ready on backup host")
 
-        self.logger.info("Migration process done, init backup VM")
+        self.logger.info("Migration process done, then go to backup VM")
         time.sleep(5)
 
         # make sure still can receive packets
-- 
2.5.5


* Re: [dts] [PATCH] update vhost-user live migration script and test plan
  2016-08-05  2:14 [dts] [PATCH] update vhost-user live migration script and test plan Qian Xu
@ 2016-08-05  2:28 ` Liu, Yong
  0 siblings, 0 replies; 2+ messages in thread
From: Liu, Yong @ 2016-08-05  2:28 UTC (permalink / raw)
  To: Xu, Qian Q, dts; +Cc: Xu, Qian Q

Qian, one comment below.

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Qian Xu
> Sent: Friday, August 05, 2016 10:14 AM
> To: dts@dpdk.org
> Cc: Xu, Qian Q
> Subject: [dts] [PATCH] update vhost-user live migration script and test
> plan
> 
> Update vhost-user live migration script based on testpmd as
> vhost backend application and some minor changes.
> 1. Update the backend application from vhost-switch sample to testpmd
> since
> some switch will filter VLAN packets. The command line and the output
> check
> has been updated according to testpmd application.
> 2. Set the qemu path for this case.
> 3.remove VLAN tag in the traffic settings.
> 4. Check if the migration is failed or not.
> 5. Update the qemu monitor quit check part.
> 
> Update vhost-user live migration test plan
> 1. Add more details about nfs settings.
> 2. Change the vhost-user backend application to testpmd and the launch
> step.
> 3. Add more details about VM access, such as telnet, ssh, scp.
> 
> Signed-off-by: Qian Xu <qian.q.xu@intel.com>
> 
> [snip]
> diff --git a/tests/TestSuite_vhost_user_live_migration.py
> b/tests/TestSuite_vhost_user_live_migration.py
> index 95e2fb3..e1b9cab 100644
> --- a/tests/TestSuite_vhost_user_live_migration.py
> +++ b/tests/TestSuite_vhost_user_live_migration.py
> @@ -33,19 +33,13 @@ class TestVhostUserLiveMigration(TestCase):
>          self.backup_tport =
> self.tester.get_local_port_bydut(self.backup_port, self.backup_dutip)
>          self.backup_tintf = self.tester.get_interface(self.backup_tport)
> 
> -        # build backup vhost-switch
> -        out = self.duts[0].send_expect("make -C examples/vhost", "# ")
> -        self.verify("Error" not in out, "compilation error 1")
> -        self.verify("No such file" not in out, "compilation error 2")
> -
> -        # build backup vhost-switch
> -        out = self.duts[1].send_expect("make -C examples/vhost", "# ")
> -        self.verify("Error" not in out, "compilation error 1")
> -        self.verify("No such file" not in out, "compilation error 2")
> -
> -        self.vhost = "./examples/vhost/build/app/vhost-switch"
> +        # Use testpmd as vhost-user application on host/backup server
> +        self.vhost = "./x86_64-native-linuxapp-gcc/app/testpmd"
>          self.vm_testpmd = "./%s/app/testpmd -c 0x3 -n 4 -- -i" %
> self.target
> -        self.virio_mac = "00:00:00:00:00:01"
> +        self.virio_mac = "52:54:00:00:00:01"
> +
> +        # Set the qemu_path for specific env
> +        self.qemu_path = "/home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-
> system-x86_64"
> 
This local path can't work on other hosts. If you need to change the default qemu path, please use the qemu option in vhost_user_live_migration.cfg.
The format will be like:
qemu= 
    path=/home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64
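
A minimal sketch of how this could look in vhost_user_live_migration.cfg (the [host] and [backup] section names are assumed here to match the VM names the suite passes to QEMUKvm, and the path is just the example above):

[host]
qemu =
    path=/home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64

[backup]
qemu =
    path=/home/qxu10/qemu-2.6.0/x86_64-softmmu/qemu-system-x86_64

With the path set there, the framework should pick up the emulator per VM, so the set_qemu_emulator() calls in the test script would not be needed.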

>          # flag for environment
>          self.env_done = False
> @@ -93,11 +87,13 @@ class TestVhostUserLiveMigration(TestCase):
>              # start vhost-switch, predict hugepage on both sockets
>              base_dir = crb.base_dir.replace('~', '/root')
>              crb.send_expect("rm -f %s/vhost-net" % base_dir, "# ")
> -            crb.send_expect("%s -c f -n 4 --socket-mem 1024 -- -p 0x1" %
> self.vhost, "bind to vhost-net")
> +            crb.send_expect("%s -c f -n 4 --socket-mem 512,512 --vdev
> 'eth_vhost0,iface=./vhost-net,queues=1' -- -i" % self.vhost, "testpmd>
> ",60)
> +            crb.send_expect("start", "testpmd> ")
> 
>          try:
>              # set up host virtual machine
>              self.host_vm = QEMUKvm(self.duts[0], 'host',
> 'vhost_user_live_migration')
> +            self.host_vm.set_qemu_emulator(self.qemu_path);
No need for this; please change the configuration file instead.
>              vhost_params = {}
>              vhost_params['driver'] = 'vhost-user'
>              # qemu command can't use ~
> @@ -119,6 +115,7 @@ class TestVhostUserLiveMigration(TestCase):
>              self.logger.info("Start virtual machine on backup host")
>              # set up backup virtual machine
>              self.backup_vm = QEMUKvm(self.duts[1], 'backup',
> 'vhost_user_live_migration')
> +            self.backup_vm.set_qemu_emulator(self.qemu_path);
No need for this; please change the configuration file instead.

> [snip]

