test suite reviews and discussions
* [dts] [PATCH 0/7] support vhost live migration automation
@ 2016-07-14 13:17 Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 1/7] framework: support close ssh session without logout Marvin Liu
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts

This patch set enables an automated validation environment for vhost user live migration.
The suite creates a virtual machine on the host and a backup virtual machine on another host.
Migration then happens between these two VMs, and the suite verifies that the virtio
net device works correctly before, during, and after the migration process.
The virtio device is driven either by a kernel module or by a dpdk pmd.
The qemu and virt related modules were modified to support the status concept and migration functions.
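At a high level, the suite drives a state transition like the following (an illustrative Python sketch; `FakeVM` and its methods are hypothetical stand-ins, not DTS classes):

```python
# Illustrative sketch of the migration flow the suite automates.
# FakeVM is a hypothetical stand-in, not part of the DTS framework.

ST_RUNNING = "RUNNING"   # VM is up, sessions usable
ST_PAUSE = "PAUSE"       # VM waiting for / finished migration

class FakeVM:
    def __init__(self, status=ST_RUNNING):
        self.status = status

    def migrate_to(self, backup):
        # once migration completes, the source pauses and the backup runs
        self.status = ST_PAUSE
        backup.status = ST_RUNNING

host_vm = FakeVM()                   # started normally on the host
backup_vm = FakeVM(status=ST_PAUSE)  # started with -incoming, paused
host_vm.migrate_to(backup_vm)
assert host_vm.status == ST_PAUSE
assert backup_vm.status == ST_RUNNING
```

Traffic verification (the virtio device receiving packets) happens before, during, and after this transition.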

Marvin Liu (7):
  framework: support close ssh session without logout
  framework tester: fix typo issue
  framework virt_base: add vm status concept
  framework qemu_kvm: support migration and serial port
  conf: add configuration file for vhost_user_live_migration suite
  test_plans: add test plan for vhost_user_live_migration suite
  tests: add vhost_user_live_migration suite

 conf/vhost_user_live_migration.cfg                 | 127 ++++++++
 framework/qemu_kvm.py                              | 123 +++++++-
 framework/ssh_connection.py                        |   4 +-
 framework/ssh_pexpect.py                           |   9 +-
 framework/tester.py                                |   4 +-
 framework/virt_base.py                             |  51 +++-
 framework/virt_dut.py                              |   6 +-
 test_plans/vhost_user_live_migration_test_plan.rst | 154 ++++++++++
 tests/TestSuite_vhost_user_live_migration.py       | 327 +++++++++++++++++++++
 9 files changed, 781 insertions(+), 24 deletions(-)
 create mode 100644 conf/vhost_user_live_migration.cfg
 create mode 100644 test_plans/vhost_user_live_migration_test_plan.rst
 create mode 100644 tests/TestSuite_vhost_user_live_migration.py

-- 
1.9.3


* [dts] [PATCH 1/7] framework: support close ssh session without logout
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 2/7] framework tester: fix typo issue Marvin Liu
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

A session may not be available for logout, e.g. after migration is done.
Add a parameter to forcibly close the connection without a logout action.
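The control flow of the graceful vs. forced close split can be sketched like this (a minimal stub; `FakeSession` stands in for a pexpect pxssh session and is not the real SSHPexpect class):

```python
# Minimal sketch of the graceful vs. forced close split.
# FakeSession is a hypothetical stand-in for a pexpect pxssh session.

class FakeSession:
    def __init__(self, alive=True):
        self.alive = alive
        self.closed_by = None

    def isalive(self):
        return self.alive

    def logout(self):
        self.closed_by = "logout"   # graceful shell logout

    def close(self):
        self.closed_by = "close"    # drop the connection outright

def close_session(session, force=False):
    if force:
        session.close()             # skip logout, e.g. after migration
    elif session.isalive():
        session.logout()

s = FakeSession(alive=True)
close_session(s)                    # graceful path
assert s.closed_by == "logout"

dead = FakeSession(alive=False)     # e.g. source VM session after migration
close_session(dead, force=True)
assert dead.closed_by == "close"
```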

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/framework/ssh_connection.py b/framework/ssh_connection.py
index 9f1aee1..edb8170 100644
--- a/framework/ssh_connection.py
+++ b/framework/ssh_connection.py
@@ -72,8 +72,8 @@ class SSHConnection(object):
         self.logger.debug(out)
         return out
 
-    def close(self):
-        self.session.close()
+    def close(self, force=False):
+        self.session.close(force)
         connection = {}
         connection[self.name] = self.session
         try:
diff --git a/framework/ssh_pexpect.py b/framework/ssh_pexpect.py
index f0348b6..1abf8a1 100644
--- a/framework/ssh_pexpect.py
+++ b/framework/ssh_pexpect.py
@@ -129,9 +129,12 @@ class SSHPexpect(object):
         output.replace("[PEXPECT]", "")
         return output
 
-    def close(self):
-        if self.isalive():
-            self.session.logout()
+    def close(self, force=False):
+        if force is True:
+            self.session.close()
+        else:
+            if self.isalive():
+                self.session.logout()
 
     def isalive(self):
         return self.session.isalive()
diff --git a/framework/virt_dut.py b/framework/virt_dut.py
index cc86827..0010e08 100644
--- a/framework/virt_dut.py
+++ b/framework/virt_dut.py
@@ -75,12 +75,12 @@ class VirtDut(DPDKdut):
     def init_log(self):
         self.logger.config_suite(self.host_dut.test_classname, 'virtdut')
 
-    def close(self):
+    def close(self, force=False):
         if self.session:
-            self.session.close()
+            self.session.close(force)
             self.session = None
         if self.alt_session:
-            self.alt_session.close()
+            self.alt_session.close(force)
             self.alt_session = None
         RemoveNicObj(self)
 
-- 
1.9.3


* [dts] [PATCH 2/7] framework tester: fix typo issue
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 1/7] framework: support close ssh session without logout Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 3/7] framework virt_base: add vm status concept Marvin Liu
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/framework/tester.py b/framework/tester.py
index 2c39ac9..2781376 100644
--- a/framework/tester.py
+++ b/framework/tester.py
@@ -143,8 +143,8 @@ class Tester(Crb):
         Return tester local port connect to specified port and specified dut.
         """
         for dut in self.duts:
-            if dut.crb['My IP'] == dutIP:
-                return self.dut.ports_map[remotePort]
+            if dut.crb['My IP'] == dutIp:
+                return dut.ports_map[remotePort]
 
     def get_local_index(self, pci):
         """
-- 
1.9.3


* [dts] [PATCH 3/7] framework virt_base: add vm status concept
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 1/7] framework: support close ssh session without logout Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 2/7] framework tester: fix typo issue Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 4/7] framework qemu_kvm: support migration and serial port Marvin Liu
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

1. Add the vm status concept; the default status is running. The status is
changed after the hypervisor module detects a status change.
2. Add a new function to support connecting the vm dut after migration is done.
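The status-gated connect logic can be sketched as follows (illustrative; `connect_vm_dut` stands in for `instantiate_vm_dut`):

```python
# Sketch of the status-gated connect logic this patch introduces;
# connect_vm_dut is a hypothetical stand-in for instantiate_vm_dut.

ST_RUNNING, ST_PAUSE = "RUNNING", "PAUSE"

def start(vm_status, connect_vm_dut):
    # a freshly started VM is connected immediately; a backup VM
    # started with -incoming is paused and has no guest OS to reach yet
    if vm_status == ST_RUNNING:
        return connect_vm_dut()
    return None

def migrated_start(vm_status, connect_vm_dut):
    # after migration the guest environment is inherited from the host
    # VM, so only re-attach the session (no driver re-binding needed)
    if vm_status == ST_PAUSE:
        return connect_vm_dut()
    return None

assert start(ST_RUNNING, lambda: "vm_dut") == "vm_dut"
assert start(ST_PAUSE, lambda: "vm_dut") is None
assert migrated_start(ST_PAUSE, lambda: "vm_dut") == "vm_dut"
```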

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/framework/virt_base.py b/framework/virt_base.py
index efad384..5cd2854 100644
--- a/framework/virt_base.py
+++ b/framework/virt_base.py
@@ -43,6 +43,10 @@ from settings import CONFIG_ROOT_PATH
 from virt_dut import VirtDut
 from utils import remove_old_rsa_key
 
+ST_NOTSTART = "NOTSTART"
+ST_PAUSE = "PAUSE"
+ST_RUNNING = "RUNNING"
+ST_UNKNOWN = "UNKNOWN"
 
 class VirtBase(object):
     """
@@ -86,6 +90,9 @@ class VirtBase(object):
         # default call back function is None
         self.callback = None
 
+        # vm status is running by default, only be changed in internal module
+        self.vm_status = ST_RUNNING
+
     def get_virt_type(self):
         """
         Get the virtual type, such as KVM, XEN or LIBVIRT.
@@ -224,7 +231,7 @@ class VirtBase(object):
         self.load_global_config()
         self.load_local_config(self.suite)
 
-    def start(self, load_config=True, set_target=True, cpu_topo='', bind_dev=True):
+    def start(self, load_config=True, set_target=True, cpu_topo=''):
         """
         Start VM and instantiate the VM with VirtDut.
         """
@@ -237,8 +244,12 @@ class VirtBase(object):
             # start virutal machine
             self._start_vm()
 
-            # connect vm dut and init running environment
-            vm_dut = self.instantiate_vm_dut(set_target, cpu_topo)
+            if self.vm_status is ST_RUNNING:
+                # connect vm dut and init running environment
+                vm_dut = self.instantiate_vm_dut(set_target, cpu_topo)
+            else:
+                vm_dut = None
+
         except Exception as vm_except:
             if self.handle_exception(vm_except):
                 print dts.RED("Handled expection " + str(type(vm_except)))
@@ -251,6 +262,25 @@ class VirtBase(object):
             return None
         return vm_dut
 
+    def migrated_start(self, set_target=True, cpu_topo=''):
+        """
+        Instantiate the VM after migration done
+        There's no need to load param and start VM because VM has been started
+        """
+        try:
+            if self.vm_status is ST_PAUSE:
+            # connect the backup vm dut; its environment is inherited from the host VM
+                vm_dut = self.instantiate_vm_dut(set_target, cpu_topo, bind_dev=False)
+        except Exception as vm_except:
+            if self.handle_exception(vm_except):
+                print dts.RED("Handled expection " + str(type(vm_except)))
+            else:
+                print dts.RED("Unhandled expection " + str(type(vm_except)))
+
+            return None
+
+        return vm_dut
+
     def handle_exception(self, vm_except):
         # show exception back trace
         exc_type, exc_value, exc_traceback = sys.exc_info()
@@ -357,10 +387,19 @@ class VirtBase(object):
         """
         Stop the VM.
         """
-        self.vm_dut.close()
-        self.vm_dut.logger.logger_exit()
-        self.vm_dut = None
+        # vm_dut may not be initialized in the migration case
+        if getattr(self, 'vm_dut', None):
+            if self.vm_status is ST_RUNNING:
+                self.vm_dut.close()
+            else:
+                # when the vm is not running, forcibly close the session
+                self.vm_dut.close(force=True)
+
+            self.vm_dut.logger.logger_exit()
+            self.vm_dut = None
+
         self._stop_vm()
+
         self.virt_pool.free_all_resource(self.vm_name)
 
     def register_exit_callback(self, callback):
-- 
1.9.3


* [dts] [PATCH 4/7] framework qemu_kvm: support migration and serial port
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
                   ` (2 preceding siblings ...)
  2016-07-14 13:17 ` [dts] [PATCH 3/7] framework virt_base: add vm status concept Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 5/7] conf: add configuration file for vhost_user_live_migration suite Marvin Liu
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

1. Add the vm status concept; the status is updated from the qemu monitor.
2. Support the migration function and migration status checking.
3. Support connecting and closing the serial port, which is the only
available session after migration.
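The migration completion check polls the qemu monitor; roughly (a self-contained sketch, with `query` standing in for the monitor session):

```python
import time

def wait_migration_done(query, retries=30, interval=0.0):
    """Poll 'info migrate' output until qemu reports completion.
    `query` is a hypothetical stand-in for the qemu monitor session."""
    for _ in range(retries):
        out = query()
        if "completed" in out:
            return True
        time.sleep(interval)
    raise RuntimeError("migration did not finish in time")

# simulate a monitor that reports completion on the third poll
outputs = iter(["Migration status: active"] * 2 +
               ["Migration status: completed"])
assert wait_migration_done(lambda: next(outputs)) is True
```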

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/framework/qemu_kvm.py b/framework/qemu_kvm.py
index 70730e2..96923e9 100644
--- a/framework/qemu_kvm.py
+++ b/framework/qemu_kvm.py
@@ -35,6 +35,7 @@ import re
 import os
 
 from virt_base import VirtBase
+from virt_base import ST_NOTSTART, ST_PAUSE, ST_RUNNING, ST_UNKNOWN
 from exception import StartVMFailedException
 from settings import get_host_ip
 
@@ -752,18 +753,60 @@ class QEMUKvm(VirtBase):
             else:
                 self.qga_sock_path = ''
 
+    def add_vm_migration(self, **options):
+        """
+        enable: yes
+        port: tcp port for live migration
+        """
+        migrate_cmd = "-incoming tcp::%(migrate_port)s"
+
+        if 'enable' in options.keys():
+            if options['enable'] == 'yes':
+                if 'port' in options.keys():
+                    self.migrate_port = options['port']
+                else:
+                    self.migrate_port = str(self.virt_pool.alloc_port(self.vm_name))
+                migrate_boot_line = migrate_cmd % {'migrate_port': self.migrate_port}
+                self.__add_boot_line(migrate_boot_line)
+
     def add_vm_serial_port(self, **options):
         """
         enable: 'yes'
         """
-        SERAIL_SOCK_PATH = "/tmp/%s_serial.sock" % self.vm_name
         if 'enable' in options.keys():
             if options['enable'] == 'yes':
-                serial_boot_line = '-serial unix:%s,server,nowait' % SERIAL_SOCK_PATH
+                self.serial_path = "/tmp/%s_serial.sock" % self.vm_name
+                serial_boot_line = '-serial unix:%s,server,nowait' % self.serial_path
                 self.__add_boot_line(serial_boot_line)
             else:
                 pass
 
+    def connect_serial_port(self, name="", first=True):
+        """
+        Connect to the serial port and return the connected session for usage.
+        If the connection failed, return None.
+        """
+        if getattr(self, 'serial_path', None):
+            self.serial_session = self.host_dut.new_session(suite=name)
+            self.serial_session.send_command("nc -U %s" % self.serial_path)
+            if first:
+                # log into Fedora OS; this may not work on all distributions
+                self.serial_session.send_expect("", "login:")
+                self.serial_session.send_expect("%s" % self.username, "Password:")
+                self.serial_session.send_expect("%s" % self.password, "# ")
+            return self.serial_session
+
+        return None
+
+    def close_serial_port(self):
+        """
+        Close the serial session if it exists
+        """
+        if getattr(self, 'serial_session', None):
+            # exit from nc first
+            self.serial_session.send_expect("^C", "# ")
+            self.host_dut.close_session(self.serial_session)
+
     def add_vm_vnc(self, **options):
         """
         displayNum: 1
@@ -822,12 +865,55 @@ class QEMUKvm(VirtBase):
         ret = self.host_session.send_expect(qemu_boot_line, '# ', verify=True)
         if type(ret) is int and ret != 0:
             raise StartVMFailedException('Start VM failed!!!')
-        out = self.__control_session('ping', '120')
-        if "Not responded" in out:
-            raise StartVMFailedException('Not response in 60 seconds!!!')
 
         self.__get_pci_mapping()
-        self.__wait_vmnet_ready()
+
+        # query status
+        self.update_status()
+
+        # a VM waiting for migration is paused and cannot be pinged
+        if self.vm_status is not ST_PAUSE:
+            # the VM responds to ping only once it is running
+            out = self.__control_session('ping', '120')
+            if "Not responded" in out:
+                raise StartVMFailedException('Not response in 120 seconds!!!')
+
+            self.__wait_vmnet_ready()
+
+    def start_migration(self, remote_ip, remote_port):
+        """
+        Send migration command to host and check whether start migration
+        """
+        # send migration command
+        migration_port = 'tcp:%(IP)s:%(PORT)s' % {'IP': remote_ip, 'PORT': remote_port}
+
+        self.__monitor_session('migrate', '-d', migration_port)
+        time.sleep(2)
+        out = self.__monitor_session('info', 'migrate')
+        if "Migration status: active" in out:
+            return True
+        else:
+            return False
+
+    def wait_migration_done(self):
+        """
+        Wait for migration to finish. If it has not finished within three
+        minutes, raise an exception.
+        """
+        # wait for migration done
+        count = 30
+        while count:
+            out = self.__monitor_session('info', 'migrate')
+            if "completed" in out:
+                self.host_logger.info("%s" % out)
+                # after migration done, status is pause
+                self.vm_status = ST_PAUSE
+                return True
+
+            time.sleep(6)
+            count -= 1
+
+        raise StartVMFailedException('Migration did not finish within 180 seconds!!!')
 
     def generate_qemu_boot_line(self):
         """
@@ -1036,10 +1122,28 @@ class QEMUKvm(VirtBase):
         for arg in args:
             cmd += ' ' + str(arg)
 
-        out = self.host_session.send_expect('%s' % cmd, '(qemu)')
+        # after quit command, qemu will exit
+        if 'quit' in cmd:
+            out = self.host_session.send_expect('%s' % cmd, '# ')
+        else:
+            out = self.host_session.send_expect('%s' % cmd, '(qemu)')
         self.host_session.send_expect('^C', "# ")
         return out
 
+    def update_status(self):
+        """
+        Query and update VM status
+        """
+        out = self.__monitor_session('info', 'status')
+        self.host_logger.info("Virtual machine status: %s" % out)
+
+        if 'paused' in out:
+            self.vm_status = ST_PAUSE
+        elif 'running' in out:
+            self.vm_status = ST_RUNNING
+        else:
+            self.vm_status = ST_UNKNOWN
+
     def __strip_guest_pci(self):
         """
         Strip all pci-passthrough device information, based on qemu monitor
@@ -1105,5 +1209,8 @@ class QEMUKvm(VirtBase):
         """
         Stop VM.
         """
-        self.__control_session('powerdown')
+        if self.vm_status is ST_RUNNING:
+            self.__control_session('powerdown')
+        else:
+            self.__monitor_session('quit')
         time.sleep(5)
-- 
1.9.3


* [dts] [PATCH 5/7] conf: add configuration file for vhost_user_live_migration suite
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
                   ` (3 preceding siblings ...)
  2016-07-14 13:17 ` [dts] [PATCH 4/7] framework qemu_kvm: support migration and serial port Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 6/7] test_plans: add test plan " Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 7/7] tests: add " Marvin Liu
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/conf/vhost_user_live_migration.cfg b/conf/vhost_user_live_migration.cfg
new file mode 100644
index 0000000..74e3b54
--- /dev/null
+++ b/conf/vhost_user_live_migration.cfg
@@ -0,0 +1,127 @@
+# QEMU options
+# name
+#       name: vm0
+#
+# enable_kvm
+#       enable: [yes | no]
+#
+# cpu
+#       model: [host | core2duo | ...]
+#           usage:
+#               choose model value from the command
+#                   qemu-system-x86_64 -cpu help
+#       number: '4' #number of vcpus
+#       cpupin: '3 4 5 6' # host cpu list
+#
+# mem
+#       size: 1024
+#
+# disk
+#       file: /path/to/image/test.img
+#
+# net
+#        type: [nic | user | tap | bridge | ...]
+#           nic
+#               opt_vlan: 0
+#                   note: Default is 0.
+#               opt_macaddr: 00:00:00:00:01:01
+#                   note: if creating a nic, it's better to specify a MAC,
+#                         else it will get a random one.
+#               opt_model:["e1000" | "virtio" | "i82551" | ...]
+#                   note: Default is e1000.
+#               opt_name: 'nic1'
+#               opt_addr: ''
+#                   note: PCI cards only.
+#               opt_vectors:
+#                   note: This option currently only affects virtio cards.
+#           user
+#               opt_vlan: 0
+#                   note: default is 0.
+#               opt_hostfwd: [tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
+#                   note: If not specified, it will be set automatically.
+#           tap
+#               opt_vlan: 0
+#                   note: default is 0.
+#               opt_br: br0
+#                   note: if choosing tap, need to specify bridge name,
+#                         else it will be br0.
+#               opt_script: QEMU_IFUP_PATH
+#                   note: if not specified, default is self.QEMU_IFUP_PATH.
+#               opt_downscript: QEMU_IFDOWN_PATH
+#                   note: if not specified, default is self.QEMU_IFDOWN_PATH.
+#
+# device
+#       driver: [pci-assign | virtio-net-pci | ...]
+#           pci-assign
+#               prop_host: 08:00.0
+#               prop_addr: 00:00:00:00:01:02
+#           virtio-net-pci
+#               prop_netdev: mynet1
+#               prop_id: net1
+#               prop_mac: 00:00:00:00:01:03
+#               prop_bus: pci.0
+#               prop_addr: 0x3
+#
+# monitor
+#       port: 6061
+#           note: if adding a monitor to the vm, you need to specify
+#                 this port, else it will get a free port
+#                 on the host machine.
+#
+# qga
+#       enable: [yes | no]
+#
+# serial_port
+#       enable: [yes | no]
+#
+# vnc
+#       displayNum: 1
+#           note: you can choose a number not used on the host.
+#
+# daemon
+#       enable: 'yes'
+#           note:
+#               By default the VM will start daemonized.
+#               Starting it on stdin is not supported now.
+# migration
+#       enable: 'yes'
+#            note:
+#                Enable migration on the backup host; this VM will wait for
+#                a later migration command.
+#       port:
+#            note:
+#                listening tcp port
+
+# vm configuration for vhost user live migration case
+[host]
+cpu =
+    model=host,number=4,cpupin=5 6 7 8;
+mem =
+    size=2048,hugepage=yes;
+disk =
+    file=/home/vm-image/vm0.img;
+login =
+    user=root,password=tester;
+qga = 
+    enable=yes;
+daemon =
+    enable=yes;
+serial_port =
+    enable=yes;
+[backup]
+cpu =
+    model=host,number=4,cpupin=5 6 7 8;
+mem =
+    size=2048,hugepage=yes;
+disk =
+    file=/mnt/nfs/vm0.img;
+login =
+    user=root,password=tester;
+qga = 
+    enable=yes;
+daemon =
+    enable=yes;
+migration =
+    enable=yes,port=4444;
+serial_port =
+    enable=yes;
-- 
1.9.3


* [dts] [PATCH 6/7] test_plans: add test plan for vhost_user_live_migration suite
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
                   ` (4 preceding siblings ...)
  2016-07-14 13:17 ` [dts] [PATCH 5/7] conf: add configuration file for vhost_user_live_migration suite Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  2016-07-14 13:17 ` [dts] [PATCH 7/7] tests: add " Marvin Liu
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
new file mode 100644
index 0000000..8d4b5c9
--- /dev/null
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -0,0 +1,154 @@
+.. Copyright (c) <2016>, Intel Corporation
+      All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+==============================
+DPDK vhost user live migration
+==============================
+This test plan verifies that vhost user live migration works based on vhost-switch.
+
+Prerequisites
+-------------
+Connect three ports to one switch, these three ports are from Host, Backup
+host and tester.
+
+Start the nfs service and export the image directory to the backup host IP:
+    host# service rpcbind start
+    host# service nfs start
+    host# cat /etc/exports
+    /home/vm-image backup-host-ip(rw,sync,no_root_squash)
+
+Make sure the host nfsd module is updated to v4 (v2 does not support files > 4G).
+
+Create enough hugepages for vhost-switch and qemu backend memory.
+    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+Bind host port to igb_uio and start vhost switch:
+    host# vhost-switch -c f -n 4 --socket-mem 1024 -- -p 0x1
+
+Start host qemu process:
+	host# qemu-system-x86_64 -name host -enable-kvm -m 2048 \
+	-drive file=/home/vm-image/vm0.img,format=raw \
+	-serial telnet:localhost:5556,server,nowait \
+	-cpu host -smp 4 \
+	-net nic,model=e1000 \
+	-net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+	-chardev socket,id=char1,path=/root/dpdk_org/vhost-net \
+	-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
+	-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
+	-object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+	-numa node,memdev=mem -mem-prealloc \
+	-monitor unix:/tmp/host.sock,server,nowait \
+	-daemonize
+
+Wait for the virtual machine to start up, then bring up the virtIO interface:
+	host-vm# ifconfig eth1 up
+
+Check that vhost-switch is connected and that packets with the mac+vlan can be
+received by the virtIO interface in the VM:
+	VHOST_DATA: (0) mac 00:00:00:00:00:01 and vlan 1000 registered
+
+Mount the host nfs folder on the backup host:
+	backup# mount -t nfs -o nolock,vers=4  host-ip:/home/vm-image /mnt/nfs
+
+Create enough hugepages for vhost-switch and qemu backend memory.
+    backup# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    backup# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+Bind backup host port to igb_uio and start vhost switch:
+    backup# vhost-switch -c f -n 4 --socket-mem 1024 -- -p 0x1
+
+Start backup host qemu with additional parameter:
+	-incoming tcp:0:4444
+
+Test Case 1: migrate with kernel driver
+=======================================
+Make sure all prerequisites have been done.
+1. Log into the host virtual machine and capture incoming packets:
+	host# telnet localhost 5556
+	host vm# ifconfig eth1 up
+
+2. Send continuous packets with mac(00:00:00:00:00:01) and vlan(1000)
+from the tester port:
+	tester# scapy
+	tester# p = Ether(dst="00:00:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/
+	            Raw('x'*20)
+	tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+3. Check packets are normally received by the virtIO interface:
+	host vm# tcpdump -i eth1 -xxx
+
+4. Connect to qemu monitor session and start migration
+	host# nc -U /tmp/host.sock
+    host# (qemu) migrate -d tcp:backup-host-ip:4444
+
+5. Check the host vm can still receive packets before migration is done.
+
+6. Query migration stats in the monitor and check the status of migration
+until it is finished:
+    host# (qemu) info migrate
+
+7. After migration is done, log into the backup vm and re-enable the virtIO interface:
+	backup vm# ifconfig eth1 down
+	backup vm# ifconfig eth1 up
+
+8. Check that the backup host vhost-switch reconnected and packets are normally received.
+
+Test Case 2: migrate with dpdk
+==============================
+Make sure all prerequisites have been done.
+1. Send continuous packets with mac(00:00:00:00:00:01) and vlan(1000):
+	tester# scapy
+	tester# p = Ether(dst="00:00:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/
+	            Raw('x'*20)
+	tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+2. Bind the virtIO interface to igb_uio and start testpmd:
+	host vm# testpmd -c 0x7 -n 4
+
+3. Check packets are normally received by testpmd:
+	host vm# testpmd> set fwd rxonly
+	host vm# testpmd> set verbose 1
+	host vm# testpmd> port 0/queue 0: received 1 packets
+
+4. Connect to qemu monitor session and start migration
+	host# nc -U /tmp/host.sock
+    host# (qemu) migrate -d tcp:backup-host-ip:4444
+
+5. Check the host vm can still receive packets before migration is done.
+
+6. Query migration stats in the monitor and check the status of migration
+until it is finished:
+    host# (qemu) info migrate
+
+7. After migration is done, log into the backup vm and check packets are received:
+	backup vm# testpmd> port 0/queue 0: received 1 packets
\ No newline at end of file
-- 
1.9.3


* [dts] [PATCH 7/7] tests: add vhost_user_live_migration suite
  2016-07-14 13:17 [dts] [PATCH 0/7] support vhost live migration automation Marvin Liu
                   ` (5 preceding siblings ...)
  2016-07-14 13:17 ` [dts] [PATCH 6/7] test_plans: add test plan " Marvin Liu
@ 2016-07-14 13:17 ` Marvin Liu
  6 siblings, 0 replies; 8+ messages in thread
From: Marvin Liu @ 2016-07-14 13:17 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

This suite verifies that the virtio net device works correctly before and
after migration. The virtio device lives in the virtual machine and is
driven either by the kernel driver or by dpdk.
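The packet checks boil down to grepping session output for received counts; an illustrative helper (hypothetical, not part of the suite):

```python
import re

def received_packets(testpmd_out):
    """Sum the packet counts reported in testpmd's verbose output
    (illustrative parser, hypothetical helper)."""
    return sum(int(n) for n in
               re.findall(r"received (\d+) packets", testpmd_out))

sample = ("port 0/queue 0: received 1 packets\n"
          "port 0/queue 0: received 2 packets\n")
assert received_packets(sample) == 3
```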

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/tests/TestSuite_vhost_user_live_migration.py b/tests/TestSuite_vhost_user_live_migration.py
new file mode 100644
index 0000000..95e2fb3
--- /dev/null
+++ b/tests/TestSuite_vhost_user_live_migration.py
@@ -0,0 +1,327 @@
+# <COPYRIGHT_TAG>
+
+import re
+import time
+
+import dts
+from qemu_kvm import QEMUKvm
+from test_case import TestCase
+from exception import VirtDutInitException
+
+
+class TestVhostUserLiveMigration(TestCase):
+
+    def set_up_all(self):
+        # verify at least two duts
+        self.verify(len(self.duts) >= 2, "Insufficient duts for live migration!!!")
+
+        # each dut requires one port
+        self.dut_ports = self.dut.get_ports()
+        # Verify that enough ports are available
+        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
+        self.dut_port = self.dut_ports[0]
+        dut_ip = self.dut.crb['My IP']
+        self.host_tport = self.tester.get_local_port_bydut(self.dut_port, dut_ip)
+        self.host_tintf = self.tester.get_interface(self.host_tport)
+
+        self.backup_ports = self.duts[1].get_ports()
+        # Verify that enough ports are available
+        self.verify(len(self.backup_ports) >= 1, "Insufficient ports for testing")
+        self.backup_port = self.backup_ports[0]
+        # backup host ip will be used in migrate command
+        self.backup_dutip = self.duts[1].crb['My IP']
+        self.backup_tport = self.tester.get_local_port_bydut(self.backup_port, self.backup_dutip)
+        self.backup_tintf = self.tester.get_interface(self.backup_tport)
+
+        # build host vhost-switch
+        out = self.duts[0].send_expect("make -C examples/vhost", "# ")
+        self.verify("Error" not in out, "compilation error 1")
+        self.verify("No such file" not in out, "compilation error 2")
+
+        # build backup vhost-switch
+        out = self.duts[1].send_expect("make -C examples/vhost", "# ")
+        self.verify("Error" not in out, "compilation error 1")
+        self.verify("No such file" not in out, "compilation error 2")
+
+        self.vhost = "./examples/vhost/build/app/vhost-switch"
+        self.vm_testpmd = "./%s/app/testpmd -c 0x3 -n 4 -- -i" % self.target
+        self.virio_mac = "00:00:00:00:00:01"
+
+        # flag for environment
+        self.env_done = False
+
+    def set_up(self):
+        self.setup_vm_env()
+
+    def bind_nic_driver(self, crb, ports, driver=""):
+        # modprobe vfio driver
+        if driver == "vfio-pci":
+            for port in ports:
+                netdev = crb.ports_info[port]['port']
+                driver_now = netdev.get_nic_driver()
+                if driver_now != 'vfio-pci':
+                    netdev.bind_driver(driver='vfio-pci')
+
+        elif driver == "igb_uio":
+            # igb_uio is insmod-ed by default, no need to check
+            for port in ports:
+                netdev = crb.ports_info[port]['port']
+                driver_now = netdev.get_nic_driver()
+                if driver_now != 'igb_uio':
+                    netdev.bind_driver(driver='igb_uio')
+        else:
+            # bind to the requested driver, or the port's default driver if empty
+            for port in ports:
+                netdev = crb.ports_info[port]['port']
+                driver_now = netdev.get_nic_driver()
+                if driver == "":
+                    driver = netdev.default_driver
+                if driver != driver_now:
+                    netdev.bind_driver(driver=driver)
+
+    def setup_vm_env(self, driver='default'):
+        """
+        Create testing environment on Host and Backup
+        """
+        if self.env_done:
+            return
+
+        # start vhost-switch on host and backup machines
+        self.logger.info("Start vhost on host and backup host")
+        for crb in self.duts[:2]:
+            self.bind_nic_driver(crb, [crb.get_ports()[0]], driver="igb_uio")
+            # start vhost-switch with pre-reserved hugepage memory
+            base_dir = crb.base_dir.replace('~', '/root')
+            crb.send_expect("rm -f %s/vhost-net" % base_dir, "# ")
+            crb.send_expect("%s -c f -n 4 --socket-mem 1024 -- -p 0x1" % self.vhost, "bind to vhost-net")
+
+        try:
+            # set up host virtual machine
+            self.host_vm = QEMUKvm(self.duts[0], 'host', 'vhost_user_live_migration')
+            vhost_params = {}
+            vhost_params['driver'] = 'vhost-user'
+            # qemu command can't use ~
+            base_dir = self.dut.base_dir.replace('~', '/root')
+            vhost_params['opt_path'] = base_dir + '/vhost-net'
+            vhost_params['opt_mac'] = self.virio_mac
+            self.host_vm.set_vm_device(**vhost_params)
+
+            self.logger.info("Start virtual machine on host")
+            self.vm_host = self.host_vm.start()
+
+            if self.vm_host is None:
+                raise Exception("Set up host VM ENV failed!")
+
+            self.host_serial = self.host_vm.connect_serial_port(name='vhost_user_live_migration')
+            if self.host_serial is None:
+                raise Exception("Connect host serial port failed!")
+
+            self.logger.info("Start virtual machine on backup host")
+            # set up backup virtual machine
+            self.backup_vm = QEMUKvm(self.duts[1], 'backup', 'vhost_user_live_migration')
+            vhost_params = {}
+            vhost_params['driver'] = 'vhost-user'
+            # qemu command can't use ~
+            base_dir = self.duts[1].base_dir.replace('~', '/root')
+            vhost_params['opt_path'] = base_dir + '/vhost-net'
+            vhost_params['opt_mac'] = self.virio_mac
+            self.backup_vm.set_vm_device(**vhost_params)
+
+            # start qemu command
+            self.backup_vm.start()
+
+        except Exception as ex:
+            if isinstance(ex, VirtDutInitException):
+                self.host_vm.stop()
+                self.host_vm = None
+                # no session created yet, call internal stop function
+                self.backup_vm._stop_vm()
+                self.backup_vm = None
+            else:
+                self.destroy_vm_env()
+            raise
+
+        self.env_done = True
+
+    def destroy_vm_env(self):
+        # if environment has been destroyed, just skip
+        if self.env_done is False:
+            return
+
+        if getattr(self, 'host_serial', None):
+            if self.host_vm is not None:
+                self.host_vm.close_serial_port()
+
+        if getattr(self, 'backup_serial', None):
+            if self.backup_serial is not None and self.backup_vm is not None:
+                self.backup_vm.close_serial_port()
+
+        self.logger.info("Stop virtual machine on host")
+        if getattr(self, 'vm_host', None):
+            if self.vm_host is not None:
+                self.host_vm.stop()
+                self.host_vm = None
+
+        self.logger.info("Stop virtual machine on backup host")
+        if getattr(self, 'vm_backup', None):
+            if self.vm_backup is not None:
+                self.vm_backup.kill_all()
+                # backup vm dut has been initialized, destroy backup vm
+                self.backup_vm.stop()
+                self.backup_vm = None
+
+        if getattr(self, 'backup_vm', None):
+            # only qemu start, no session created
+            if self.backup_vm is not None:
+                self.backup_vm.stop()
+                self.backup_vm = None
+
+        # after vm stopped, stop vhost-switch
+        for crb in self.duts[:2]:
+            crb.kill_all()
+
+        # restore ports to their default kernel driver
+        for crb in self.duts[:2]:
+            self.bind_nic_driver(crb, [crb.get_ports()[0]], driver="")
+
+        self.env_done = False
+
+    def send_pkts(self, intf, number=0):
+        """
+        send packet from tester
+        """
+        sendp_fmt = "sendp([Ether(dst='%(DMAC)s')/Dot1Q(vlan=1000)/IP()/UDP()/Raw('x'*18)], iface='%(INTF)s', count=%(COUNT)d)"
+        sendp_cmd = sendp_fmt % {'DMAC': self.virio_mac, 'INTF': intf, 'COUNT': number}
+        self.tester.scapy_append(sendp_cmd)
+        self.tester.scapy_execute()
+        # sleep 10 seconds for heavy load with backup host
+        time.sleep(10)
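The `sendp_fmt` template above is plain Python string formatting handed to scapy on the tester; a minimal standalone sketch of the same construction (the MAC, interface name, and count here are illustrative placeholders, not values from this suite's configuration):

```python
# Sketch of how the suite builds the scapy sendp() command string.
# Values are placeholders for illustration only.
sendp_fmt = ("sendp([Ether(dst='%(DMAC)s')/Dot1Q(vlan=1000)/IP()/UDP()/"
             "Raw('x'*18)], iface='%(INTF)s', count=%(COUNT)d)")
cmd = sendp_fmt % {'DMAC': '00:00:00:00:00:01', 'INTF': 'ens1f0', 'COUNT': 10}
print(cmd)
```

The resulting string is appended to the tester's scapy session verbatim, so the packet layering (VLAN 1000 over Ethernet, then IP/UDP with an 18-byte payload) is fixed at format time.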
+
+    def verify_dpdk(self, tester_port, serial_session):
+        num_pkts = 10
+
+        stats_pat = re.compile(r"RX-packets: (\d+)")
+        intf = self.tester.get_interface(tester_port)
+        serial_session.send_expect("stop", "testpmd> ")
+        serial_session.send_expect("set fwd rxonly", "testpmd> ")
+        serial_session.send_expect("clear port stats all", "testpmd> ")
+        serial_session.send_expect("start tx_first", "testpmd> ")
+
+        # send packets from tester
+        self.send_pkts(intf, number=num_pkts)
+
+        out = serial_session.send_expect("show port stats 0", "testpmd> ")
+        m = stats_pat.search(out)
+        if m:
+            num_received = int(m.group(1))
+        else:
+            num_received = 0
+
+        self.verify(num_received >= num_pkts, "Did not receive packets as expected!!!")
+        self.logger.info("Verified %d packets received" % num_received)
+
+    def verify_kernel(self, tester_port, vm_dut):
+        """
+        Function to verify packets received by virtIO
+        """
+        intf = self.tester.get_interface(tester_port)
+        num_pkts = 10
+
+        # get host interface
+        vm_intf = vm_dut.ports_info[0]['port'].get_interface_name()
+        # start tcpdump the interface
+        vm_dut.send_expect("ifconfig %s up" % vm_intf, "# ")
+        vm_dut.send_expect("tcpdump -i %s -P in -v" % vm_intf, "listening on")
+        # wait for promisc on
+        time.sleep(3)
+        # send packets from tester
+        self.send_pkts(intf, number=num_pkts)
+
+        # killall tcpdump and verify packet received
+        out = vm_dut.get_session_output(timeout=1)
+        vm_dut.send_expect("^C", "# ")
+        num = out.count('UDP')
+        self.verify(num == num_pkts, "Did not receive packets as expected!!!")
+        self.logger.info("Verified %d packets received" % num_pkts)
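The kernel-driver check reduces to counting 'UDP' occurrences in the tcpdump output; a self-contained sketch with made-up capture lines (the line format only approximates real tcpdump output):

```python
# Made-up tcpdump output (illustrative only); each received packet
# produces one line containing "UDP", so a simple substring count suffices.
tcpdump_out = "\n".join(
    "12:00:0%d.000000 IP 192.168.0.1.1024 > 192.168.0.2.1024: UDP, length 18" % i
    for i in range(3)
)
num = tcpdump_out.count('UDP')
```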
+
+    def test_migrate_with_kernel(self):
+        """
+        Verify migrate virtIO device from host to backup host,
+        Verify before/in/after migration, device with kernel driver can receive packets
+        """
+        # bind virtio-net back to virtio-pci
+        self.bind_nic_driver(self.vm_host, [self.vm_host.get_ports()[0]], driver="")
+        # verify host virtio-net work fine
+        self.verify_kernel(self.host_tport, self.vm_host)
+
+        self.logger.info("Migrate host VM to backup host")
+        # start live migration
+        self.host_vm.start_migration(self.backup_dutip, self.backup_vm.migrate_port)
+
+        # make sure still can receive packets in migration process
+        self.verify_kernel(self.host_tport, self.vm_host)
+
+        self.logger.info("Waiting migration process done")
+        # wait live migration done
+        self.host_vm.wait_migration_done()
+
+        # check vhost-switch log after migration
+        out = self.duts[0].get_session_output(timeout=1)
+        self.verify("device has been removed" in out, "Device not removed for host")
+        out = self.duts[1].get_session_output(timeout=1)
+        self.verify("virtio is now ready" in out, "Device not ready on backup host")
+
+        self.logger.info("Migration process done, init backup VM")
+        # connected backup VM
+        self.vm_backup = self.backup_vm.migrated_start()
+
+        # make sure still can receive packets
+        self.verify_kernel(self.backup_tport, self.vm_backup)
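`start_migration()` and `wait_migration_done()` come from the qemu_kvm changes in patch 4/7; a hedged sketch of what such helpers plausibly issue over the QEMU human monitor (the command strings below are assumed from the QEMU monitor protocol, not taken from this patch set):

```python
def build_migrate_cmd(backup_ip, migrate_port):
    # QEMU human-monitor live migration command; "-d" detaches so the
    # monitor stays responsive while guest pages are transferred.
    return "migrate -d tcp:%s:%d" % (backup_ip, migrate_port)

def migration_done(info_migrate_output):
    # "info migrate" reports "Migration status: completed" when finished;
    # the status line format is assumed from QEMU monitor output.
    return "completed" in info_migrate_output
```

On the backup side, QEMU is started with an `-incoming tcp:0.0.0.0:<port>` option so it listens for the migration stream on `migrate_port`.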
+
+    def test_migrate_with_dpdk(self):
+        """
+        Migrate the virtIO device from host to backup host and verify that,
+        before/during/after migration, testpmd on the virtio PMD can still
+        receive packets.
+        """
+        # bind virtio-net to igb_uio
+        self.bind_nic_driver(self.vm_host, [self.vm_host.get_ports()[0]], driver="igb_uio")
+
+        # start testpmd on host vm
+        base_dir = self.vm_host.base_dir.replace('~', '/root')
+        self.host_serial.send_expect('cd %s' % base_dir, "# ")
+        self.host_serial.send_expect(self.vm_testpmd, "testpmd> ")
+
+        # verify testpmd receive packets
+        self.verify_dpdk(self.host_tport, self.host_serial)
+
+        self.logger.info("Migrate host VM to backup host")
+        # start live migration
+        self.host_vm.start_migration(self.backup_dutip, self.backup_vm.migrate_port)
+
+        # make sure still can receive packets in migration process
+        self.verify_dpdk(self.host_tport, self.host_serial)
+
+        self.logger.info("Waiting migration process done")
+        # wait live migration done
+        self.host_vm.wait_migration_done()
+
+        # check vhost-switch log after migration
+        out = self.duts[0].get_session_output(timeout=1)
+        self.verify("device has been removed" in out, "Device not removed for host")
+        out = self.duts[1].get_session_output(timeout=1)
+        self.verify("virtio is now ready" in out, "Device not ready on backup host")
+
+        self.logger.info("Migration process done, init backup VM")
+        time.sleep(5)
+
+        # make sure still can receive packets
+        self.backup_serial = self.backup_vm.connect_serial_port(name='vhost_user_live_migration', first=False)
+        if self.backup_serial is None:
+            raise Exception("Connect backup host serial port failed!")
+
+        self.verify_dpdk(self.backup_tport, self.backup_serial)
+
+        # quit testpmd
+        self.backup_serial.send_expect("quit", "# ")
+
+    def tear_down(self):
+        self.destroy_vm_env()
+
+    def tear_down_all(self):
+        pass
-- 
1.9.3
