From: "Xu, HuilongX" <huilongx.xu@intel.com>
To: "Jiajia, SunX" <sunx.jiajia@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Subject: Re: [dts] ['dts-v1' 4/9] Add VM class and the virtual DUT class and the virtual resource module
Date: Mon, 18 May 2015 08:23:27 +0000	[thread overview]
Message-ID: <DF2A19295B96364286FEB7F3DDA27A460110F111@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <1431925646-1314-5-git-send-email-sunx.jiajia@intel.com>

Hi Jiajia,
I have a comment at "@@ -202,7 +213,7 @@ class DPDKdut(Dut):"

Would you check it? Thanks a lot.
-----Original Message-----
From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of sjiajiax
Sent: Monday, May 18, 2015 1:07 PM
To: dts@dpdk.org
Subject: [dts] ['dts-v1' 4/9] Add VM class and the virtual DUT class and the virtual resource module

Signed-off-by: sjiajiax <sunx.jiajia@intel.com>
---
 framework/qemu_kvm.py      | 912 +++++++++++++++++++++++++++++++++++++++++++++
 framework/virt_base.py     | 250 +++++++++++++
 framework/virt_dut.py      | 239 ++++++++++++
 framework/virt_resource.py | 486 ++++++++++++++++++++++++
 4 files changed, 1887 insertions(+)
 create mode 100644 framework/qemu_kvm.py
 create mode 100644 framework/virt_base.py
 create mode 100644 framework/virt_dut.py
 create mode 100644 framework/virt_resource.py

diff --git a/framework/qemu_kvm.py b/framework/qemu_kvm.py
new file mode 100644
index 0000000..5f5f665
--- /dev/null
+++ b/framework/qemu_kvm.py
@@ -0,0 +1,912 @@
+# <COPYRIGHT_TAG>
+
+import time
+import re
+import os
+
+from virt_base import VirtBase
+from exception import StartVMFailedException
+
+# This name is directly defined by the qemu guest agent service,
+# so do not change it unless the service itself changes it.
+QGA_DEV_NAME = 'org.qemu.guest_agent.0'
+# This template defines a socket path on the host connected with
+# a specified VM
+QGA_SOCK_PATH_TEMPLATE = '/tmp/%(vm_name)s_qga0.sock'
+
+
+class QEMUKvm(VirtBase):
+
+    DEFAULT_BRIDGE = 'br0'
+    QEMU_IFUP = """#!/bin/sh
+
+set -x
+
+switch=%(switch)s
+
+if [ -n "$1" ];then
+    tunctl -t $1
+    ip link set $1 up
+    sleep 0.5s
+    brctl addif $switch $1
+    exit 0
+else
+    echo "Error: no interface specified"
+    exit 1
+fi
+"""
+    QEMU_IFUP_PATH = '/etc/qemu-ifup'
+
+    def __init__(self, dut, vm_name, suite_name):
+        super(QEMUKvm, self).__init__(dut, vm_name, suite_name)
+        self.set_vm_name(self.vm_name)
+        self.set_vm_enable_kvm()
+        self.set_vm_qga()
+        self.set_vm_daemon()
+
+        # initialize qemu emulator, example: qemu-system-x86_64
+        self.qemu_emulator = self.get_qemu_emulator()
+
+        # initialize qemu boot command line
+        # example: qemu-system-x86_64 -name vm1 -m 2048 -vnc :1 -daemonize
+        self.whole_qemu_kvm_boot_line = ''
+
+        self.init_vm_request_resource()
+
+        QGA_CLI_PATH = '-r dep/QMP/'
+        self.host_session.copy_file_to(QGA_CLI_PATH)
+
+    def init_vm_request_resource(self):
+        """
+        Initialize the vcpus that will be pinned to the VM.
+        If this param is specified, the given vcpus will
+        be pinned to the VM with the command 'taskset' when
+        starting the VM.
+        example:
+            vcpus_pinned_to_vm = '1 2 3 4'
+            taskset -c 1,2,3,4 qemu-boot-command-line
+        """
+        self.vcpus_pinned_to_vm = ''
+
+        # initialize assigned PCI
+        self.assigned_pcis = []
+
+    def get_virt_type(self):
+        """
+        Get the virtual type.
+        """
+        return 'KVM'
+
+    def get_qemu_emulator(self):
+        """
+        Get the qemu emulator based on the crb.
+        """
+        arch = self.host_session.send_expect('uname -m', '# ')
+        return 'qemu-system-' + arch
+
+    def set_qemu_emulator(self, qemu_emulator):
+        """
+        Set the qemu emulator explicitly.
+        """
+        out = self.host_session.send_expect(
+            'whereis %s' % str(qemu_emulator), '[.*')
+        command_paths = out.split(':')[1:]
+        if not command_paths[0].lstrip():
+            print "No emulator [ %s ] on the DUT [ %s ]" % \
+                (qemu_emulator, self.host_dut.get_ip_address())
+            return None
+        self.qemu_emulator = qemu_emulator
+
+    def has_virtual_ability(self):
+        """
+        Check if host has the virtual ability.
+        """
+        out = self.host_session.send_expect('lsmod | grep kvm', '# ')
+        if 'kvm' in out and 'kvm_intel' in out:
+            return True
+        else:
+            return False
+
+    def enable_virtual_ability(self):
+        """
+        Load the virtual modules of the kernel to enable the virtual ability.
+        """
+        self.host_session.send_expect('modprobe kvm', '# ')
+        self.host_session.send_expect('modprobe kvm_intel', '# ')
         We need to check that the kvm and kvm_intel modules were loaded successfully before returning.
         Because when the BIOS disables VT-d or the Linux kernel does not support virtualization,
         executing 'modprobe kvm_intel' through the session still appears to succeed.
+        return True
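As a reference for the suggested check, a minimal standalone sketch (the helper name is illustrative, not part of the patch) that verifies module load from 'lsmod' output by exact name; exact matching also avoids 'kvm' matching inside the 'kvm_intel' line as a substring:

```python
def modules_loaded(lsmod_output, modules=('kvm', 'kvm_intel')):
    # Collect the first column (module name) of each non-empty lsmod line.
    loaded = {line.split()[0] for line in lsmod_output.splitlines() if line.strip()}
    # Exact-name membership, so 'kvm' does not match inside 'kvm_intel'.
    return all(mod in loaded for mod in modules)
```

enable_virtual_ability() could then return the result of this check on the session's 'lsmod | grep kvm' output instead of an unconditional True.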
+
+    def disk_image_is_ok(self, image):
+        """
+        Check if the image is OK and no error.
+        """
+        pass
+
+    def image_is_used(self, image_path):
+        """
+        Check if the image has been used on the host.
+        """
+        qemu_cmd_lines = self.host_session.send_expect(
+            "ps aux | grep qemu | grep -v grep", "# ")
+
+        image_name_flag = '/' + image_path.strip().split('/')[-1] + ' '
+        if image_path in qemu_cmd_lines or \
+                image_name_flag in qemu_cmd_lines:
+            return True
+        return False
+
+    def __add_boot_line(self, option_boot_line):
+        """
+        Add boot option into the boot line.
+        """
+        separator = ' '
+        self.whole_qemu_kvm_boot_line += separator + option_boot_line
+
+    def set_vm_enable_kvm(self, enable='yes'):
+        """
+        Set VM boot option to enable the option 'enable-kvm'.
+        """
+        self.params.append({'enable_kvm': [{'enable': '%s' % enable}]})
+
+    def add_vm_enable_kvm(self, **options):
+        """
+        'enable': 'yes'
+        """
+        if 'enable' in options.keys() and \
+                options['enable'] == 'yes':
+            enable_kvm_boot_line = '-enable-kvm'
+            self.__add_boot_line(enable_kvm_boot_line)
+
+    def set_vm_name(self, vm_name):
+        """
+        Set VM name.
+        """
+        self.params.append({'name': [{'name': '%s' % vm_name}]})
+
+    def add_vm_name(self, **options):
+        """
+        name: vm1
+        """
+        if 'name' in options.keys() and \
+                options['name']:
+            name_boot_line = '-name %s' % options['name']
+            self.__add_boot_line(name_boot_line)
+
+    def add_vm_cpu(self, **options):
+        """
+        model: [host | core2duo | ...]
+               usage:
+                    choose model value from the command
+                        qemu-system-x86_64 -cpu help
+        number: '4' #number of vcpus
+        cpupin: '3 4 5 6' # host cpu list
+        """
+        if 'model' in options.keys() and \
+                options['model']:
+            cpu_boot_line = '-cpu %s' % options['model']
+            self.__add_boot_line(cpu_boot_line)
+        if 'number' in options.keys() and \
+                options['number']:
+            smp_cmd_line = '-smp %d' % int(options['number'])
+            self.__add_boot_line(smp_cmd_line)
+        if 'cpupin' in options.keys() and \
+                options['cpupin']:
+            self.vcpus_pinned_to_vm = str(options['cpupin'])
+
+    def add_vm_mem(self, **options):
+        """
+        size: 1024
+        """
+        if 'size' in options.keys():
+            mem_boot_line = '-m %s' % options['size']
+            self.__add_boot_line(mem_boot_line)
+
+    def add_vm_disk(self, **options):
+        """
+        file: /home/image/test.img
+        """
+        if 'file' in options.keys():
+            disk_boot_line = '-drive file=%s' % options['file']
+            self.__add_boot_line(disk_boot_line)
+
+    def add_vm_net(self, **options):
+        """
+        Add VM net device.
+        type: [nic | user | tap | bridge | ...]
+        opt_[vlan | fd | br | mac | ...]
+            note:the sub-option will be decided according to the net type.
+        """
+        if 'type' in options.keys():
+            if 'opt_vlan' not in options.keys():
+                options['opt_vlan'] = '0'
+            if options['type'] == 'nic':
+                self.__add_vm_net_nic(**options)
+            if options['type'] == 'user':
+                self.__add_vm_net_user(**options)
+            if options['type'] == 'tap':
+                self.__add_vm_net_tap(**options)
+
+            if options['type'] == 'user':
+                self.net_type = 'hostfwd'
+            elif options['type'] in ['tap', 'bridge']:
+                self.net_type = 'bridge'
+
+    def __add_vm_net_nic(self, **options):
+        """
+        type: nic
+        opt_vlan: 0
+            note: Default is 0.
+        opt_macaddr: 00:00:00:00:01:01
+            note: if creating a nic, it's better to specify a MAC;
+                  otherwise a random one will be generated.
+        opt_model:["e1000" | "virtio" | "i82551" | ...]
+            note: Default is e1000.
+        opt_name: 'nic1'
+        opt_addr: ''
+            note: PCI cards only.
+        opt_vectors:
+            note: This option currently only affects virtio cards.
+        """
+        net_boot_line = '-net nic'
+        separator = ','
+        if 'opt_vlan' in options.keys() and \
+                options['opt_vlan']:
+            net_boot_line += separator + 'vlan=%s' % options['opt_vlan']
+
+        # add MAC info
+        if 'opt_macaddr' in options.keys() and \
+                options['opt_macaddr']:
+            mac = options['opt_macaddr']
+        else:
+            mac = self.generate_unique_mac()
+        net_boot_line += separator + 'macaddr=%s' % mac
+
+        if 'opt_model' in options.keys() and \
+                options['opt_model']:
+            net_boot_line += separator + 'model=%s' % options['opt_model']
+        if 'opt_name' in options.keys() and \
+                options['opt_name']:
+            net_boot_line += separator + 'name=%s' % options['opt_name']
+        if 'opt_addr' in options.keys() and \
+                options['opt_addr']:
+            net_boot_line += separator + 'addr=%s' % options['opt_addr']
+        if 'opt_vectors' in options.keys() and \
+                options['opt_vectors']:
+            net_boot_line += separator + 'vectors=%s' % options['opt_vectors']
+
+        if self.__string_has_multi_fields(net_boot_line, separator):
+            self.__add_boot_line(net_boot_line)
+
+    def __add_vm_net_user(self, **options):
+        """
+        type: user
+        opt_vlan: 0
+            note: default is 0.
+        opt_hostfwd: [tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
+        """
+        net_boot_line = '-net user'
+        separator = ','
+        if 'opt_vlan' in options.keys() and \
+                options['opt_vlan']:
+            net_boot_line += separator + 'vlan=%s' % options['opt_vlan']
+        if 'opt_hostfwd' in options.keys() and \
+                options['opt_hostfwd']:
+            self.__check_net_user_opt_hostfwd(options['opt_hostfwd'])
+            opt_hostfwd = options['opt_hostfwd']
+        else:
+            opt_hostfwd = '::-:'
+        hostfwd_line = self.__parse_net_user_opt_hostfwd(opt_hostfwd)
+        net_boot_line += separator + 'hostfwd=%s' % hostfwd_line
+
+        if self.__string_has_multi_fields(net_boot_line, separator):
+            self.__add_boot_line(net_boot_line)
+
+    def __check_net_user_opt_hostfwd(self, opt_hostfwd):
+        """
+        Use regular expression to check if hostfwd value format is correct.
+        """
+        regx_ip = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
+        regx_hostfwd = r'(tcp|udp)?:(%s)?:\d+-(%s)?:\d+' % (regx_ip, regx_ip)
+        if not re.match(regx_hostfwd, opt_hostfwd):
+            raise Exception("Option opt_hostfwd format is not correct,\n" +
+                            "it is %s,\n " % opt_hostfwd +
+                            "it should be [tcp|udp]:[hostaddr]:hostport-" +
+                            "[guestaddr]:guestport.\n")
+
+    def __parse_net_user_opt_hostfwd(self, opt_hostfwd):
+        """
+        Parse the boot option 'hostfwd'.
+        """
+        separator = ':'
+        field = lambda option, index, separator=':': \
+            option.split(separator)[index]
+
+        # get the forward type
+        fwd_type = field(opt_hostfwd, 0)
+        if not fwd_type:
+            fwd_type = 'tcp'
+
+        # get the host addr
+        host_addr = field(opt_hostfwd, 1)
+        if not host_addr:
+            host_addr = str(self.host_dut.get_ip_address())
+
+        # get the host port in the option
+        host_port = field(opt_hostfwd, 2).split('-')[0]
+        if not host_port:
+            host_port = str(self.virt_pool.alloc_port(self.vm_name))
+        self.redir_port = host_port
+
+        # get the guest addr
+        try:
+            guest_addr = str(field(opt_hostfwd, 2).split('-')[1])
+        except IndexError as e:
+            guest_addr = ''
+
+        # get the guest port in the option
+        guest_port = str(field(opt_hostfwd, 3))
+        if not guest_port:
+            guest_port = '22'
+
+        hostfwd_line = fwd_type + separator + \
+            host_addr + separator + \
+            host_port + \
+            '-' + \
+            guest_addr + separator + \
+            guest_port
+
+        # init the redirect incoming TCP or UDP connections
+        # just combine host address and host port, it is enough
+        # for using ssh to connect with VM
+        self.hostfwd_addr = host_addr + separator + host_port
+
+        return hostfwd_line
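For clarity, the hostfwd expansion above behaves as in this standalone sketch; the host address and port defaults are passed in explicitly here since the patch takes them from the DUT and the resource pool (names are illustrative):

```python
def parse_hostfwd(opt, default_addr='10.0.0.1', default_port='5555'):
    # Expand '[tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport',
    # filling empty fields with defaults (guest port defaults to ssh).
    fields = opt.split(':')
    fwd_type = fields[0] or 'tcp'
    host_addr = fields[1] or default_addr
    host_port = fields[2].split('-')[0] or default_port
    guest_addr = fields[2].split('-')[1] if '-' in fields[2] else ''
    guest_port = fields[3] or '22'
    return '%s:%s:%s-%s:%s' % (fwd_type, host_addr, host_port,
                               guest_addr, guest_port)
```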
+
+    def __add_vm_net_tap(self, **options):
+        """
+        type: tap
+        opt_vlan: 0
+            note: default is 0.
+        opt_br: br0
+            note: if choosing tap, need to specify bridge name,
+                  else it will be br0.
+        opt_script: QEMU_IFUP_PATH
+            note: if not specified, default is self.QEMU_IFUP_PATH.
+        opt_downscript: QEMU_IFDOWN_PATH
+            note: if not specified, default is self.QEMU_IFDOWN_PATH.
+        """
+        net_boot_line = '-net tap'
+        separator = ','
+
+        # add bridge info
+        if 'opt_br' in options.keys() and \
+                options['opt_br']:
+            bridge = options['opt_br']
+        else:
+            bridge = self.DEFAULT_BRIDGE
+        self.__generate_net_config_script(str(bridge))
+
+        if 'opt_vlan' in options.keys() and \
+                options['opt_vlan']:
+            net_boot_line += separator + 'vlan=%s' % options['opt_vlan']
+
+        # add network configure script path
+        if 'opt_script' in options.keys() and \
+                options['opt_script']:
+            script_path = options['opt_script']
+        else:
+            script_path = self.QEMU_IFUP_PATH
+        net_boot_line += separator + 'script=%s' % script_path
+
+        # add network configure downscript path
+        if 'opt_downscript' in options.keys() and \
+                options['opt_downscript']:
+            net_boot_line += separator + \
+                'downscript=%s' % options['opt_downscript']
+
+        if self.__string_has_multi_fields(net_boot_line, separator):
+            self.__add_boot_line(net_boot_line)
+
+    def __generate_net_config_script(self, switch=DEFAULT_BRIDGE):
+        """
+        Generate a script for qemu emulator to build a tap device
+        between host and guest.
+        """
+        qemu_ifup = self.QEMU_IFUP % {'switch': switch}
+        file_name = os.path.basename(self.QEMU_IFUP_PATH)
+        tmp_file_path = '/tmp/%s' % file_name
+        self.host_dut.create_file(qemu_ifup, tmp_file_path)
+        self.host_session.send_expect('mv -f ~/%s %s' % (file_name,
+                                                         self.QEMU_IFUP_PATH), '# ')
+        self.host_session.send_expect(
+            'chmod +x %s' % self.QEMU_IFUP_PATH, '# ')
+
+    def set_vm_device(self, driver='pci-assign', **props):
+        """
+        Set VM device with specified driver.
+        """
+        props['driver'] = driver
+        index = self.find_option_index('device')
+        if index:
+            self.params[index]['device'].append(props)
+        else:
+            self.params.append({'device': [props]})
+
+    def add_vm_device(self, **options):
+        """
+        driver: [pci-assign | virtio-net-pci | ...]
+        prop_[host | addr | ...]: value
+            note:the sub-property will be decided according to the driver.
+        """
+        if 'driver' in options.keys() and \
+                options['driver']:
+            if options['driver'] == 'pci-assign':
+                self.__add_vm_pci_assign(**options)
+            elif options['driver'] == 'virtio-net-pci':
+                self.__add_vm_virtio_net_pci(**options)
+
+    def __add_vm_pci_assign(self, **options):
+        """
+        driver: pci-assign
+        prop_host: 08:00.0
+        prop_addr: 00:00:00:00:01:02
+        """
+        dev_boot_line = '-device pci-assign'
+        separator = ','
+        if 'prop_host' in options.keys() and \
+                options['prop_host']:
+            dev_boot_line += separator + 'host=%s' % options['prop_host']
+        if 'prop_addr' in options.keys() and \
+                options['prop_addr']:
+            dev_boot_line += separator + 'addr=%s' % options['prop_addr']
+            self.assigned_pcis.append(options['prop_addr'])
+
+        if self.__string_has_multi_fields(dev_boot_line, separator):
+            self.__add_boot_line(dev_boot_line)
+
+    def __add_vm_virtio_net_pci(self, **options):
+        """
+        driver: virtio-net-pci
+        prop_netdev: mynet1
+        prop_id: net1
+        prop_mac: 00:00:00:00:01:03
+        prop_bus: pci.0
+        prop_addr: 0x3
+        """
+        dev_boot_line = '-device virtio-net-pci'
+        separator = ','
+        if 'prop_netdev' in options.keys() and \
+                options['prop_netdev']:
+            dev_boot_line += separator + 'netdev=%s' % options['prop_netdev']
+        if 'prop_id' in options.keys() and \
+                options['prop_id']:
+            dev_boot_line += separator + 'id=%s' % options['prop_id']
+        if 'prop_mac' in options.keys() and \
+                options['prop_mac']:
+            dev_boot_line += separator + 'mac=%s' % options['prop_mac']
+        if 'prop_bus' in options.keys() and \
+                options['prop_bus']:
+            dev_boot_line += separator + 'bus=%s' % options['prop_bus']
+        if 'prop_addr' in options.keys() and \
+                options['prop_addr']:
+            dev_boot_line += separator + 'addr=%s' % options['prop_addr']
+
+        if self.__string_has_multi_fields(dev_boot_line, separator):
+            self.__add_boot_line(dev_boot_line)
+
+    def __string_has_multi_fields(self, string, separator, field_num=2):
+        """
+        Check if the string has multiple fields which are split by the
+        specified separator.
+        """
+        fields = string.split(separator)
+        number = 0
+        for field in fields:
+            if field:
+                number += 1
+        if number >= field_num:
+            return True
+        else:
+            return False
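__string_has_multi_fields() reduces to counting non-empty fields; an equivalent one-line sketch (illustrative name):

```python
def has_multi_fields(string, separator, field_num=2):
    # True when at least field_num non-empty fields remain after splitting.
    return len([f for f in string.split(separator) if f]) >= field_num
```

This is the guard that keeps bare option stems like '-net user' (no sub-options added) out of the boot line.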
+
+    def add_vm_monitor(self, **options):
+        """
+        port: 6061   # if adding a monitor to the vm, specify
+                       this port; otherwise a free port on the
+                       host machine will be allocated.
+        """
+        if 'port' in options.keys():
+            if options['port']:
+                port = options['port']
+            else:
+                port = self.virt_pool.alloc_port(self.vm_name)
+
+            monitor_boot_line = '-monitor tcp::%d,server,nowait' % int(port)
+            self.__add_boot_line(monitor_boot_line)
+
+    def set_vm_qga(self, enable='yes'):
+        """
+        Set VM qemu-guest-agent.
+        """
+        index = self.find_option_index('qga')
+        if index:
+            self.params[index] = {'qga': [{'enable': '%s' % enable}]}
+        else:
+            self.params.append({'qga': [{'enable': '%s' % enable}]})
+        QGA_SOCK_PATH = QGA_SOCK_PATH_TEMPLATE % {'vm_name': self.vm_name}
+        self.qga_sock_path = QGA_SOCK_PATH
+
+    def add_vm_qga(self, **options):
+        """
+        enable: 'yes'
+        """
+        QGA_DEV_ID = '%(vm_name)s_qga0' % {'vm_name': self.vm_name}
+        QGA_SOCK_PATH = QGA_SOCK_PATH_TEMPLATE % {'vm_name': self.vm_name}
+
+        separator = ' '
+
+        if 'enable' in options.keys():
+            if options['enable'] == 'yes':
+                qga_boot_block = '-chardev socket,path=%(SOCK_PATH)s,server,nowait,id=%(ID)s' + \
+                                 separator + '-device virtio-serial' + separator + \
+                                 '-device virtserialport,chardev=%(ID)s,name=%(DEV_NAME)s'
+                qga_boot_line = qga_boot_block % {'SOCK_PATH': QGA_SOCK_PATH,
+                                                  'DEV_NAME': QGA_DEV_NAME,
+                                                  'ID': QGA_DEV_ID}
+                self.__add_boot_line(qga_boot_line)
+                self.qga_sock_path = QGA_SOCK_PATH
+            else:
+                self.qga_sock_path = ''
+
+    def add_vm_serial_port(self, **options):
+        """
+        enable: 'yes'
+        """
+        SERIAL_SOCK_PATH = "/tmp/%s_serial.sock" % self.vm_name
+        if 'enable' in options.keys():
+            if options['enable'] == 'yes':
+                serial_boot_line = '-serial unix:%s,server,nowait' % SERIAL_SOCK_PATH
+                self.__add_boot_line(serial_boot_line)
+            else:
+                pass
+
+    def add_vm_vnc(self, **options):
+        """
+        displayNum: 1
+        """
+        if 'displayNum' in options.keys() and \
+                options['displayNum']:
+            display_num = options['displayNum']
+        else:
+            display_num = self.virt_pool.alloc_vnc_num(self.vm_name)
+
+        vnc_boot_line = '-vnc :%d' % int(display_num)
+        self.__add_boot_line(vnc_boot_line)
+
+    def set_vm_daemon(self, enable='yes'):
+        """
+        Set VM daemon option.
+        """
+        index = self.find_option_index('daemon')
+        if index:
+            self.params[index] = {'daemon': [{'enable': '%s' % enable}]}
+        else:
+            self.params.append({'daemon': [{'enable': '%s' % enable}]})
+
+    def add_vm_daemon(self, **options):
+        """
+        enable: 'yes'
+            note:
+                By default VM will start with the daemonize status.
+                Not support starting it on the stdin now.
+        """
+        if 'enable' in options.keys() and \
+                options['enable'] == 'no':
+            pass
+        else:
+            daemon_boot_line = '-daemonize'
+            self.__add_boot_line(daemon_boot_line)
+
+    def start_vm(self):
+        """
+        Start VM.
+        """
+        qemu_emulator = self.qemu_emulator
+
+        self.__alloc_assigned_pcis()
+
+        if self.vcpus_pinned_to_vm.strip():
+            vcpus = self.__alloc_vcpus()
+
+            if vcpus.strip():
+                whole_qemu_kvm_boot_line = 'taskset -c %s ' % vcpus + \
+                    qemu_emulator + ' ' + \
+                    self.whole_qemu_kvm_boot_line
+        else:
+            whole_qemu_kvm_boot_line = qemu_emulator + ' ' + \
+                self.whole_qemu_kvm_boot_line
+
+        # Start VM using the qemu command
+        out = self.host_session.send_expect(whole_qemu_kvm_boot_line, '# ')
+        time.sleep(30)
+        if out:
+            raise StartVMFailedException(out)
+
+    def __alloc_vcpus(self):
+        """
+        Allocate virtual CPUs for VM.
+        """
+        req_cpus = self.vcpus_pinned_to_vm.split()
+        cpus = self.virt_pool.alloc_cpu(vm=self.vm_name, corelist=req_cpus)
+
+        vcpus_pinned_to_vm = ''
+        for cpu in cpus:
+            vcpus_pinned_to_vm += ',' + cpu
+        vcpus_pinned_to_vm = vcpus_pinned_to_vm.lstrip(',')
+
+        if len(req_cpus) != len(cpus):
+            print "WARNING: Just pin vcpus [ %s ] to VM!" % vcpus_pinned_to_vm
+
+        return vcpus_pinned_to_vm
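The vcpu string assembly above can be shown with a small sketch that also surfaces the partial-allocation case the warning reports (names are illustrative):

```python
def format_pinned_vcpus(requested, allocated):
    # Join the allocated cores with commas for 'taskset -c';
    # flag the case where fewer cores were granted than requested.
    pinned = ','.join(allocated)
    partial = len(requested) != len(allocated)
    return pinned, partial
```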
+
+    def __alloc_assigned_pcis(self):
+        """
+        Record the PCI device info
+        Struct: {dev pci: {'is_vf': [True | False],
+                            'pf_pci': pci}}
+        example:
+            {'08:10.0':{'is_vf':True, 'pf_pci': 08:00.0}}
+        """
+        assigned_pcis_info = {}
+        for pci in self.assigned_pcis:
+            assigned_pcis_info[pci] = {}
+            if self.__is_vf_pci(pci):
+                assigned_pcis_info[pci]['is_vf'] = True
+                pf_pci = self.__map_vf_to_pf(pci)
+                assigned_pcis_info[pci]['pf_pci'] = pf_pci
+                if self.virt_pool.alloc_vf_from_pf(vm=self.vm_name,
+                                                   pf_pci=pf_pci,
+                                                   *[pci]):
+                    port = self.__get_vf_port(pci)
+                    port.unbind_driver()
+                    port.bind_driver('pci-stub')
+            else:
+                # check that if any VF of specified PF has been
+                # used, raise exception
+                vf_pci = self.__vf_has_been_assigned(pci, **assigned_pcis_info)
+                if vf_pci:
+                    raise Exception(
+                        "Error: A VF [%s] generated by PF [%s] has " %
+                        (vf_pci, pci) +
+                        "been assigned to VM, so this PF can not be " +
+                        "assigned to VM again!")
+                # get the port instance of PF
+                port = self.__get_net_device_by_pci(pci)
+
+                if self.virt_pool.alloc_pf(vm=self.vm_name,
+                                           *[pci]):
+                    port.unbind_driver()
+
+    def __is_vf_pci(self, dev_pci):
+        """
+        Check if the specified PCI dev is a VF.
+        """
+        for port_info in self.host_dut.ports_info:
+            if 'sriov_vfs_pci' in port_info.keys():
+                if dev_pci in port_info['sriov_vfs_pci']:
+                    return True
+        return False
+
+    def __map_vf_to_pf(self, dev_pci):
+        """
+        Map the specified VF to PF.
+        """
+        for port_info in self.host_dut.ports_info:
+            if 'sriov_vfs_pci' in port_info.keys():
+                if dev_pci in port_info['sriov_vfs_pci']:
+                    return port_info['pci']
+        return None
+
+    def __get_vf_port(self, dev_pci):
+        """
+        Get the NetDevice instance of specified VF.
+        """
+        for port_info in self.host_dut.ports_info:
+            if 'vfs_port' in port_info.keys():
+                for port in port_info['vfs_port']:
+                    if dev_pci == port.pci:
+                        return port
+        return None
+
+    def __vf_has_been_assigned(self, pf_pci, **assigned_pcis_info):
+        """
+        Check if any VF of the specified PF has been assigned.
+        """
+        for pci in assigned_pcis_info.keys():
+            if assigned_pcis_info[pci]['is_vf'] and \
+                    assigned_pcis_info[pci]['pf_pci'] == pf_pci:
+                return pci
+        return False
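Given the assigned_pcis_info structure documented in __alloc_assigned_pcis(), the lookup is just a scan over the dict; a standalone sketch (illustrative name):

```python
def vf_assigned_from_pf(pf_pci, assigned):
    # assigned: {dev_pci: {'is_vf': bool, 'pf_pci': pci}}
    # Return the first VF PCI already assigned from pf_pci, else False.
    for pci, info in assigned.items():
        if info.get('is_vf') and info.get('pf_pci') == pf_pci:
            return pci
    return False
```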
+
+    def __get_net_device_by_pci(self, net_device_pci):
+        """
+        Get NetDevice instance by the specified PCI bus number.
+        """
+        port_info = self.host_dut.get_port_info(net_device_pci)
+        return port_info['port']
+
+    def get_vm_ip(self):
+        """
+        Get VM IP.
+        """
+        get_vm_ip = getattr(self, "get_vm_ip_%s" % self.net_type)
+        return get_vm_ip()
+
+    def get_vm_ip_hostfwd(self):
+        """
+        Get IP which VM is connected by hostfwd.
+        """
+        return self.hostfwd_addr
+
+    def get_vm_ip_bridge(self):
+        """
+        Get IP which VM is connected by bridge.
+        """
+        out = self.__control_session('ping', '60')
+        if not out:
+            time.sleep(10)
+            out = self.__control_session('ifconfig')
+            ips = re.findall(r'inet (\d+\.\d+\.\d+\.\d+)', out)
+
+            if '127.0.0.1' in ips:
+                ips.remove('127.0.0.1')
+
+            num = 3
+            for ip in ips:
+                out = self.host_session.send_expect(
+                    'ping -c %d %s' % (num, ip), '# ')
+                if '0% packet loss' in out:
+                    return ip
+        return ''
+
+    def __control_session(self, command, *args):
+        """
+        Use the qemu guest agent service to control VM.
+        Note:
+            :command: there are these commands as below:
+                       cat, fsfreeze, fstrim, halt, ifconfig, info,\
+                       ping, powerdown, reboot, shutdown, suspend
+            :args: give different args by the different commands.
+        """
+        if not self.qga_sock_path:
+            self.host_logger.info(
+                "No QGA service between host [ %s ] and guest [ %s ]" %
+                (self.host_dut.Name, self.vm_name))
+            return None
+
+        cmd_head = '~/QMP/' + \
+            "qemu-ga-client " + \
+            "--address=%s %s" % \
+            (self.qga_sock_path, command)
+
+        cmd = cmd_head
+        for arg in args:
+            cmd += ' ' + str(arg)
+
+        out = self.host_session.send_expect(cmd, '# ')
+
+        return out
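The command line sent to qemu-ga-client can be composed as in this standalone sketch (helper name illustrative); note that each extra argument must be appended rather than rebuilt from the head:

```python
def build_qga_cmd(sock_path, command, *args):
    # Compose a qemu-ga-client invocation for the given QGA socket.
    cmd = '~/QMP/qemu-ga-client --address=%s %s' % (sock_path, command)
    for arg in args:
        cmd += ' ' + str(arg)
    return cmd
```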
+
+    def stop(self):
+        """
+        Stop VM.
+        """
+        self.__control_session('powerdown')
+        time.sleep(5)
+        self.virt_pool.free_all_resource(self.vm_name)
+
+
+if __name__ == "__main__":
+    import subprocess
+    import sys
+    import pdb
+    from serializer import Serializer
+    from crbs import crbs
+    from tester import Tester
+    from dut import Dut
+    import dts
+    from virt_proxy import VirtProxy
+
+    command = "ifconfig br0"
+    subp = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
+    subp.wait()
+
+    intf_info = subp.stdout.readlines()
+    for line_info in intf_info:
+        regx = re.search(r'inet (\d+\.\d+\.\d+\.\d+)', line_info)
+        if regx:
+            dutIP = regx.group(1)
+            break
+
+    print "DEBUG: dutIp: ", dutIP
+
+    # look up in crbs - to find the matching IP
+    crbInst = None
+    for crb in crbs:
+        if crb['IP'] == dutIP:
+            crbInst = crb
+            break
+
+    # only run on the dut in known crbs
+    if crbInst is None:
+        raise Exception("No available crb instance!!!")
+
+    # initialize the dut and tester
+    serializer = Serializer()
+    serializer.set_serialized_filename('../.%s.cache' % crbInst['IP'])
+    serializer.load_from_file()
+
+    read_cache = None
+    skip_setup = None
+
+    project = "dpdk"
+    dts.Package = 'dep/dpdk.tar.gz'
+    dut = dts.get_project_obj(project, Dut, crbInst, serializer)
+    tester = dts.get_project_obj(project, Tester, crbInst, serializer)
+    dut.tester = tester
+    dut.base_dir = 'dpdk'
+    dut.set_nic_type('niantic')
+    tester.dut = dut
+
+    tester.set_test_types(True, False)
+    dut.set_test_types(True, False)
+
+    tester.set_speedup_options(read_cache, skip_setup)
+    tester.tester_prerequisites()
+    dut.set_speedup_options(read_cache, skip_setup)
+    dut.dut_prerequisites()
+
+    # test generating and destroying VFs
+    port0 = dut.ports_info[0]['port']
+    dut.generate_sriov_vfs_by_port(0, 4)
+    print "port 0 sriov vfs: ", dut.ports_info[0]
+
+    dut.destroy_sriov_vfs_by_port(0)
+
+    time.sleep(2)
+
+    # test binding and unbinding the NIC
+    port0_pci = dut.ports_info[0]['pci']
+    port0.unbind_driver()
+
+    dut.logger.info("JUST TESTING!!!")
+
+    # Start VM by the qemu kvm config file
+    vm1 = QEMUKvm(dut, 'vm1', 'pmd_sriov')
+    print "VM config params:"
+    print vm1.params
+    vm1_dut = vm1.start()
+
+    try:
+        host_ip = vm1.session.send_expect("ifconfig", '# ')
+        print "Host IP:"
+        print host_ip
+
+        vm1_ip = vm1.get_vm_ip()
+        print "VM1 IP:"
+        print vm1_ip
+
+        print "VM1 PCI device:"
+        print vm1_dut.session.send_expect('lspci -nn | grep -i eth', '# ')
+    except Exception as e:
+        print e
+        vm1.stop()
+        port0.bind_driver()
+    # Stop VM
+    vm1.stop()
+    port0.bind_driver()
+
+    dut.host_logger.logger_exit()
+    dut.logger.logger_exit()
+    tester.logger.logger_exit()
+
+    print "Start and stop VM over!"
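For reviewers: the per-VM QGA socket path used by qemu_kvm.py is derived from the QGA_SOCK_PATH_TEMPLATE constant defined at the top of the file; a minimal sketch (the helper name qga_sock_path is illustrative, not part of the patch):

```python
# Sketch of how qemu_kvm.py derives the host-side socket path that
# connects to a VM's QEMU guest agent device (org.qemu.guest_agent.0).
QGA_SOCK_PATH_TEMPLATE = '/tmp/%(vm_name)s_qga0.sock'

def qga_sock_path(vm_name):
    # %-formatting with a dict mapping fills the %(vm_name)s placeholder
    return QGA_SOCK_PATH_TEMPLATE % {'vm_name': vm_name}
```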
diff --git a/framework/virt_base.py b/framework/virt_base.py
new file mode 100644
index 0000000..625c309
--- /dev/null
+++ b/framework/virt_base.py
@@ -0,0 +1,250 @@
+# <COPYRIGHT_TAG>
+
+from random import randint
+from itertools import imap
+
+import dts
+from dut import Dut
+from config import VirtConf
+from config import VIRTCONF
+from logger import getLogger
+from settings import CONFIG_ROOT_PATH
+from virt_dut import VirtDut
+
+
+class VirtBase(object):
+
+    """
+    Base class for hypervisor-specific virtualization types. It implements
+    the functions that configure and compose the VM boot command. With these
+    functions we can get and set the VM boot parameters and instantiate the VM.
+    """
+
+    def __init__(self, dut, vm_name, suite_name):
+        """
+        Initialize the VirtBase.
+        dut: the instance of Dut
+        vm_name: the name of the VM configured in the configuration file
+        suite_name: the name of test suite
+        """
+        self.host_dut = dut
+        self.vm_name = vm_name
+        self.suite = suite_name
+
+        # init the host session and logger for VM
+        self.host_dut.init_host_session()
+
+        # replace dut session
+        self.host_session = self.host_dut.host_session
+        self.host_logger = self.host_dut.logger
+
+        # init the host resource pool for VM
+        self.virt_pool = self.host_dut.virt_pool
+
+        if not self.has_virtual_ability():
+            if not self.enable_virtual_ability():
+                raise Exception(
+                    "DUT [ %s ] cannot enable the virtual ability!!!"
+                    % self.host_dut.crb['IP'])
+
+        self.virt_type = self.get_virt_type()
+        self.load_global_config()
+        self.load_local_config(suite_name)
+
+    def get_virt_type(self):
+        """
+        Get the virtual type, such as KVM, XEN or LIBVIRT.
+        """
+        raise NotImplementedError
+
+    def has_virtual_ability(self):
+        """
+        Check if the host has the ability of virtualization.
+        """
+        raise NotImplementedError
+
+    def enable_virtual_ability(self):
+        """
+        Enable the virtual ability on the DUT.
+        """
+        raise NotImplementedError
+
+    def load_global_config(self):
+        """
+        Load the global configuration from DTS_ROOT_PATH/conf.
+        """
+        conf = VirtConf(VIRTCONF)
+        conf.load_virt_config(self.virt_type)
+        self.params = conf.get_virt_config()
+
+    def load_local_config(self, suite_name):
+        """
+        Load the suite-local configuration from DTS_ROOT_PATH/conf.
+        """
+        # load local configuration by suite and vm name
+        conf = VirtConf(CONFIG_ROOT_PATH + suite_name + '.cfg')
+        conf.load_virt_config(self.vm_name)
+        localparams = conf.get_virt_config()
+        # replace global configurations with local configurations
+        for param in localparams:
+            if 'mem' in param.keys():
+                self.__save_local_config('mem', param['mem'])
+                continue
+            if 'cpu' in param.keys():
+                self.__save_local_config('cpu', param['cpu'])
+                continue
+            # save local configurations
+            self.params.append(param)
+
+    def __save_local_config(self, key, value):
+        """
+        Save a local config value into the global params list self.params.
+        """
+        for param in self.params:
+            if key in param.keys():
+                param[key] = value
+
+    def compose_boot_param(self):
+        """
+        Compose all boot parameters for starting the VM.
+        """
+        for param in self.params:
+            key = param.keys()[0]
+            value = param[key]
+            try:
+                param_func = getattr(self, 'add_vm_' + key)
+                if callable(param_func):
+                    for option in value:
+                        param_func(**option)
+                else:
+                    print "Virt %s function not implemented!!!" % key
+            except Exception as e:
+                print "Failed: ", e
+
+    def find_option_index(self, option):
+        """
+        Find a boot option in the params list (generated from the
+        global and local configurations) and return the index at which
+        it can be found, or None if the option is not present.
+        """
+        index = 0
+        for param in self.params:
+            key = param.keys()[0]
+            if key.strip() == option.strip():
+                return index
+            index += 1
+
+        return None
+
+    def generate_unique_mac(self):
+        """
+        Generate a unique MAC based on the DUT.
+        """
+        mac_head = '00:00:00:'
+        mac_tail = ':'.join(
+            ['%02x' % x for x in imap(lambda x:randint(0, 255), range(3))])
+        return mac_head + mac_tail
+
+    def get_vm_ip(self):
+        """
+        Get the VM IP.
+        """
+        raise NotImplementedError
+
+    def start(self):
+        """
+        Start VM and instantiate the VM with VirtDut.
+        """
+        self.compose_boot_param()
+        try:
+            self.start_vm()
+        except Exception as e:
+            self.host_logger.error(e)
+            return None
+        try:
+            vm_dut = self.instantiate_vm_dut()
+        except Exception as e:
+            self.host_logger.error(e)
+            self.stop()
+            return None
+        return vm_dut
+
+    def start_vm(self):
+        """
+        Start VM.
+        """
+        raise NotImplementedError
+
+    def instantiate_vm_dut(self):
+        """
+        Instantiate the Dut class for VM.
+        """
+        crb = self.host_dut.crb.copy()
+        crb['bypass core0'] = False
+        vm_ip = self.get_vm_ip()
+        crb['IP'] = vm_ip
+        if ':' not in vm_ip:
+            remote_ip = vm_ip.strip()
+            redirect_port = ''
+        else:
+            remote_addr = vm_ip.split(':')
+            remote_ip = remote_addr[0].strip()
+            redirect_port = remote_addr[1].strip()
+        self.__remove_old_rsa_key(remote_ip, redirect_port)
+
+        serializer = self.host_dut.serializer
+
+        try:
+            vm_dut = VirtDut(
+                crb,
+                serializer,
+                self.virt_type,
+                self.vm_name,
+                self.suite)
+        except Exception as e:
+            raise Exception(e)
+        vm_dut.nic_type = 'any'
+        vm_dut.tester = self.host_dut.tester
+        vm_dut.host_dut = self.host_dut
+        vm_dut.host_session = self.host_session
+
+        read_cache = False
+        skip_setup = self.host_dut.skip_setup
+        base_dir = self.host_dut.base_dir
+        vm_dut.set_speedup_options(read_cache, skip_setup)
+        func_only = self.host_dut.want_func_tests
+        perf_only = self.host_dut.want_perf_tests
+        vm_dut.set_test_types(func_tests=func_only, perf_tests=perf_only)
+        # base_dir should be set before prerequisites
+        vm_dut.set_directory(base_dir)
+
+        # setting up dpdk in vm, must call at last
+        vm_dut.prerequisites(dts.Package, dts.Patches)
+
+        target = self.host_dut.target
+        if target:
+            vm_dut.set_target(target)
+        else:
+            raise Exception("Cannot get the HOST DUT test target!")
+
+        return vm_dut
+
+    def __remove_old_rsa_key(self, remote_ip, redirect_port):
+        """
+        Remove the old RSA key of specified remote IP.
+        """
+        rsa_key_path = "~/.ssh/known_hosts"
+        if redirect_port:
+            remove_rsa_key_cmd = "sed -i '/^\[%s\]:%d/d' %s" % \
+                (remote_ip.strip(), int(
+                 redirect_port), rsa_key_path)
+        else:
+            remove_rsa_key_cmd = "sed -i '/^%s/d' %s" % \
+                (remote_ip.strip(), rsa_key_path)
+        self.host_dut.tester.send_expect(remove_rsa_key_cmd, "# ")
+
+    def stop(self):
+        """
+        Stop the VM specified by its name.
+        """
+        raise NotImplementedError
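As a reference for reviewers, the "IP[:redirect_port]" parsing that instantiate_vm_dut() performs before connecting to the VM can be sketched as a minimal, self-contained helper (the function name split_vm_addr is illustrative, not part of the patch):

```python
# Sketch of the address parsing in VirtBase.instantiate_vm_dut(): a VM
# address is either a plain IP, or "IP:port" when ssh is redirected
# through a host port.
def split_vm_addr(vm_ip):
    if ':' not in vm_ip:
        return vm_ip.strip(), ''
    remote_addr = vm_ip.split(':')
    return remote_addr[0].strip(), remote_addr[1].strip()
```

When the redirect port is non-empty, the old RSA key is removed from known_hosts with the "[IP]:port" form instead of the bare IP.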
diff --git a/framework/virt_dut.py b/framework/virt_dut.py
new file mode 100644
index 0000000..1073253
--- /dev/null
+++ b/framework/virt_dut.py
@@ -0,0 +1,239 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import os
+import re
+import time
+import dts
+import settings
+from config import PortConf
+from settings import NICS, LOG_NAME_SEP
+from ssh_connection import SSHConnection
+from project_dpdk import DPDKdut
+from dut import Dut
+from net_device import NetDevice
+from logger import getLogger
+
+
+class VirtDut(DPDKdut):
+
+    """
+    A connection to the virtualized CRB (VM) under test.
+    This class sends commands to the VM and validates the responses.
+    It is implemented over ssh, and all operations are in fact delegated
+    to an instance of either CRBLinuxApp or CRBBareMetal.
+    """
+
+    def __init__(self, crb, serializer, virttype, vm_name, suite):
+        super(Dut, self).__init__(crb, serializer)
+        self.vm_ip = self.get_ip_address()
+        self.NAME = 'virtdut' + LOG_NAME_SEP + '%s' % self.vm_ip
+        # load port config from suite cfg
+        self.suite = suite
+        self.logger = getLogger(self.NAME)
+        self.logger.config_execution('vmdut')
+        self.session = SSHConnection(self.vm_ip, self.NAME,
+                                     self.get_password())
+        self.session.init_log(self.logger)
+
+        # if redirect ssh port, there's only one session enabled
+        self.alt_session = SSHConnection(self.vm_ip, self.NAME + '_alt',
+                                         self.get_password())
+        self.alt_session.init_log(self.logger)
+
+        self.number_of_cores = 0
+        self.tester = None
+        self.cores = []
+        self.architecture = None
+        self.ports_info = None
+        self.ports_map = []
+        self.virttype = virttype
+        self.vmtype = ''
+        if self.virttype == 'XEN':
+            self.vmtype = 'domu'
+            self.virttype = 'host'
+
+    def set_nic_type(self, nic_type):
+        """
+        Set the NIC type to be validated on this CRB.
+        """
+        self.nic_type = nic_type
+        # vm_dut config will load from vm configuration file
+
+    def load_portconf(self):
+        """
+        Load port config for this virtual machine
+        """
+        return
+
+    def set_target(self, target):
+        """
+        Set the build target and environment variables, configure hugepages
+        on the DUT, install the modules required by DPDK, and bind the
+        interfaces to igb_uio.
+        """
+        self.set_toolchain(target)
+
+        # set env variable
+        # These have to be setup all the time. Some tests need to compile
+        # example apps by themselves and will fail otherwise.
+        self.send_expect("export RTE_TARGET=" + target, "#")
+        self.send_expect("export RTE_SDK=`pwd`", "#")
+
+        if not self.skip_setup:
+            self.build_install_dpdk(target)
+
+        self.setup_memory(hugepages=512)
+        self.setup_modules(target)
+
+        self.bind_interfaces_linux('igb_uio')
+
+    def prerequisites(self, pkgName, patch):
+        """
+        Prerequisite function; should be called before executing any test
+        case. It scans all lcore information on the DUT, then scans PCI
+        devices to collect NIC information, and finally sets up the DUT's
+        environment for validation.
+        """
+        self.prepare_package(pkgName, patch)
+
+        self.send_expect("cd %s" % self.base_dir, "# ")
+        self.host_session.send_expect("cd %s" % self.base_dir, "# ")
+        self.send_expect("alias ls='ls --color=none'", "#")
+
+        if self.get_os_type() == 'freebsd':
+            self.send_expect('alias make=gmake', '# ')
+            self.send_expect('alias sed=gsed', '# ')
+
+        self.init_core_list()
+        self.pci_devices_information()
+
+        # scan ports before restore interface
+        self.scan_ports()
+        # restore dut ports to kernel
+        if self.vmtype != 'domu':
+            self.restore_interfaces()
+        else:
+            self.restore_interfaces_domu()
+        # rescan ports after interface up
+        self.rescan_ports()
+
+        # no need to rescan ports for guest os just bootup
+        # load port info from config file
+        self.load_portconf()
+
+        # enable tester port ipv6
+        self.host_dut.enable_tester_ipv6()
+        self.mount_procfs()
+        # auto detect network topology
+        self.map_available_ports()
+        # disable tester port ipv6
+        self.host_dut.disable_tester_ipv6()
+
+        # print latest ports_info
+        for port_info in self.ports_info:
+            self.logger.info(port_info)
+
+    def restore_interfaces_domu(self):
+        """
+        Restore Linux interfaces.
+        """
+        for port in self.ports_info:
+            pci_bus = port['pci']
+            pci_id = port['type']
+            driver = settings.get_nic_driver(pci_id)
+            if driver is not None:
+                addr_array = pci_bus.split(':')
+                bus_id = addr_array[0]
+                devfun_id = addr_array[1]
+                port = NetDevice(self, bus_id, devfun_id)
+                itf = port.get_interface_name()
+                self.send_expect("ifconfig %s up" % itf, "# ")
+                time.sleep(30)
+                print self.send_expect("ip link ls %s" % itf, "# ")
+            else:
+                self.logger.info(
+                    "NOT FOUND DRIVER FOR PORT (%s|%s)!!!" % (pci_bus, pci_id))
+
+    def pci_devices_information(self):
+        self.pci_devices_information_uncached()
+
+    def get_memory_channels(self):
+        """
+        Virtual machine has no memory channel concept, so always return 1
+        """
+        return 1
+
+    def check_ports_available(self, pci_bus, pci_id):
+        """
+        Check whether an auto-scanned port is ready to use.
+        """
+        pci_addr = "%s:%s" % (pci_bus, pci_id)
+        if pci_id == "8086:100e":
+            return False
+        return True
+        # pci_addr = "%s:%s" % (pci_bus, pci_id)
+        # if self.nic_type == 'any':
+        # load vm port conf need another function
+        # need add vitrual function device into NICS
+
+    def scan_ports(self):
+        """
+        Scan port information; for a VM, always scan without cache.
+        """
+        self.scan_ports_uncached()
+
+    def scan_ports_uncached(self):
+        """
+        Scan ports and collect each port's PCI ID, MAC address and IPv6 address.
+        """
+        scan_ports_uncached = getattr(
+            self, 'scan_ports_uncached_%s' % self.get_os_type())
+        return scan_ports_uncached()
+
+    def map_available_ports(self):
+        """
+        Load or generate network connection mapping list.
+        """
+        self.map_available_ports_uncached()
+        self.logger.warning("DUT PORT MAP: " + str(self.ports_map))
+
+    def send_ping6(self, localPort, ipv6, mac=''):
+        """
+        Send ping6 packet from local port with destination ipv6 address.
+        """
+        if self.ports_info[localPort]['type'] == 'ixia':
+            pass
+        else:
+            return self.send_expect("ping6 -w 1 -c 1 -A -I %s %s" % (self.ports_info[localPort]['intf'], ipv6), "# ", 10)
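The generate_unique_mac() helper in virt_base.py above builds a MAC with a fixed 00:00:00 prefix and three random octets; a minimal, standalone sketch of the same logic (written without itertools.imap so it runs on both Python 2 and 3):

```python
# Sketch of VirtBase.generate_unique_mac(): a fixed '00:00:00:' head
# plus three random octets rendered as lowercase hex pairs.
from random import randint

def generate_unique_mac():
    mac_head = '00:00:00:'
    mac_tail = ':'.join(['%02x' % randint(0, 255) for _ in range(3)])
    return mac_head + mac_tail
```

Note the prefix is only locally unique per host; collisions across three random octets are unlikely but not impossible.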
diff --git a/framework/virt_resource.py b/framework/virt_resource.py
new file mode 100644
index 0000000..856f9dc
--- /dev/null
+++ b/framework/virt_resource.py
@@ -0,0 +1,486 @@
+#!/usr/bin/python
+# BSD LICENSE
+#
+# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+from random import randint
+
+from utils import get_obj_funcs
+
+INIT_FREE_PORT = 6060
+
+
+class VirtResource(object):
+
+    """
+    Class handling DUT resources such as CPU, memory and net devices.
+    """
+
+    def __init__(self, dut):
+        self.dut = dut
+
+        self.cores = [int(core['thread']) for core in dut.cores]
+        # initialized unused cores
+        self.unused_cores = self.cores[:]
+        # initialized used cores
+        self.used_cores = [-1] * len(self.unused_cores)
+
+        self.ports_info = dut.ports_info
+        # initialized unused ports
+        self.ports = [port['pci'] for port in dut.ports_info]
+        self.unused_ports = self.ports[:]
+        # initialized used ports
+        self.used_ports = ['unused'] * len(self.unused_ports)
+
+        # initialized vf ports
+        self.vfs_info = []
+        self.vfs = []
+        self.unused_vfs = []
+        self.used_vfs = []
+
+        # save allocated cores and related vm
+        self.allocated_info = {}
+
+    def __port_used(self, pci):
+        index = self.ports.index(pci)
+        self.used_ports[index] = pci
+        self.unused_ports[index] = 'used'
+
+    def __port_unused(self, pci):
+        index = self.ports.index(pci)
+        self.unused_ports[index] = pci
+        self.used_ports[index] = 'unused'
+
+    def __port_on_socket(self, pci, socket):
+        for port in self.ports_info:
+            if port['pci'] == pci:
+                if socket == -1:
+                    return True
+
+                if port['numa'] == socket:
+                    return True
+                else:
+                    return False
+
+        return False
+
+    def __vf_used(self, pci):
+        index = self.vfs.index(pci)
+        self.used_vfs[index] = pci
+        self.unused_vfs[index] = 'used'
+
+    def __vf_unused(self, pci):
+        index = self.vfs.index(pci)
+        self.used_vfs[index] = 'unused'
+        self.unused_vfs[index] = pci
+
+    def __core_used(self, core):
+        core = int(core)
+        index = self.cores.index(core)
+        self.used_cores[index] = core
+        self.unused_cores[index] = -1
+
+    def __core_unused(self, core):
+        core = int(core)
+        index = self.cores.index(core)
+        self.unused_cores[index] = core
+        self.used_cores[index] = -1
+
+    def __core_on_socket(self, core, socket):
+        for dut_core in self.dut.cores:
+            if int(dut_core['thread']) == core:
+                if socket == -1:
+                    return True
+
+                if int(dut_core['socket']) == socket:
+                    return True
+                else:
+                    return False
+
+        return False
+
+    def __core_isused(self, core):
+        index = self.cores.index(core)
+        if self.used_cores[index] != -1:
+            return True
+        else:
+            return False
+
+    def reserve_cpu(self, coremask=''):
+        """
+        Reserve the CPUs used by DPDK according to the hex core mask.
+        """
+        val = int(coremask, base=16)
+        cpus = []
+        index = 0
+        while val != 0:
+            if val & 0x1:
+                cpus.append(index)
+
+            val = val >> 1
+            index += 1
+
+        for cpu in cpus:
+            self.__core_used(cpu)
+
+    def alloc_cpu(self, vm='', number=-1, socket=-1, corelist=None):
+        """
+        There are two options for requesting CPU resources for a VM.
+        If number is not -1, allocate that many CPUs from the unused cores.
+        If corelist is not None, allocate the listed cores after checking
+        their availability.
+        """
+        cores = []
+
+        if vm == '':
+            print "Alloc cpu requires a virtual machine name!!!"
+            return cores
+
+        if number != -1:
+            for core in self.unused_cores:
+                if core != -1 and number != 0:
+                    if self.__core_on_socket(core, socket) is True:
+                        self.__core_used(core)
+                        cores.append(str(core))
+                        number = number - 1
+            if number != 0:
+                print "Can't allocate requested cpus!!!"
+
+        if corelist is not None:
+            for core in corelist:
+                if self.__core_isused(int(core)) is True:
+                    print "Core %s has been used!!!" % core
+                else:
+                    if self.__core_on_socket(int(core), socket) is True:
+                        self.__core_used(int(core))
+                        cores.append(core)
+
+        if vm not in self.allocated_info:
+            self.allocated_info[vm] = {}
+
+        self.allocated_info[vm]['cores'] = cores
+        return cores
+
+    def __vm_has_resource(self, vm, resource=''):
+        if vm == '':
+            self.dut.logger.info("VM name cannot be NULL!!!")
+            raise Exception("VM name cannot be NULL!!!")
+        if vm not in self.allocated_info:
+            self.dut.logger.info(
+                "There is no resource allocated to VM [%s]." % vm)
+            return False
+        if resource == '':
+            return True
+        if resource not in self.allocated_info[vm]:
+            self.dut.logger.info(
+                "There is no resource [%s] allocated to VM [%s] " %
+                (resource, vm))
+            return False
+        return True
+
+    def free_cpu(self, vm):
+        if self.__vm_has_resource(vm, 'cores'):
+            for core in self.allocated_info[vm]['cores']:
+                self.__core_unused(core)
+            self.allocated_info[vm].pop('cores')
+
+    def alloc_pf(self, vm='', number=-1, socket=-1, pflist=[]):
+        """
+        There are two options for requesting PF devices for a VM.
+        If number is not -1, allocate that many PF devices from the unused PFs.
+        If pflist is not empty, allocate the listed PF devices after checking
+        their availability.
+        """
+        ports = []
+
+        if number != -1:
+            for pci in self.unused_ports:
+                if pci != 'unused' and number != 0:
+                    if self.__port_on_socket(pci, socket) is True:
+                        self.__port_used(pci)
+                        ports.append(pci)
+                        number = number - 1
+            if number != 0:
+                print "Can't allocate requested PF devices!!!"
+
+        if pflist is not None:
+            for pci in pflist:
+                if pci in self.used_ports:
+                    print "Port %s has been used!!!" % pci
+                else:
+                    if self.__port_on_socket(pci, socket) is True:
+                        self.__port_used(pci)
+                        ports.append(pci)
+
+        if vm not in self.allocated_info:
+            self.allocated_info[vm] = {}
+
+        self.allocated_info[vm]['ports'] = ports
+        return ports
+
+    def free_pf(self, vm):
+        if self.__vm_has_resource(vm, 'ports'):
+            for pci in self.allocated_info[vm]['ports']:
+                self.__port_unused(pci)
+            self.allocated_info[vm].pop('ports')
+
+    def alloc_vf_from_pf(self, vm='', pf_pci='', number=-1, vflist=[]):
+        """
+        There are two options for requesting VF devices of a PF device.
+        If number is not -1, allocate that many VF devices from the unused VFs.
+        If vflist is not empty, allocate the listed VF devices after checking
+        their availability.
+        """
+        vfs = []
+        if vm == '':
+            print "Alloc VF requires a virtual machine name!!!"
+            return vfs
+
+        if pf_pci == '':
+            print "Alloc VF requires a PF pci address!!!"
+            return vfs
+
+        for vf_info in self.vfs_info:
+            if vf_info['pf_pci'] == pf_pci:
+                if vf_info['pci'] in vflist:
+                    vfs.append(vf_info['pci'])
+                    continue
+
+                if number > 0:
+                    vfs.append(vf_info['pci'])
+                    number = number - 1
+
+        for vf in vfs:
+            self.__vf_used(vf)
+
+        if vm not in self.allocated_info:
+            self.allocated_info[vm] = {}
+
+        self.allocated_info[vm]['vfs'] = vfs
+        return vfs
+
+    def free_vf(self, vm):
+        if self.__vm_has_resource(vm, 'vfs'):
+            for pci in self.allocated_info[vm]['vfs']:
+                self.__vf_unused(pci)
+            self.allocated_info[vm].pop('vfs')
+
+    def add_vf_on_pf(self, pf_pci='', vflist=[]):
+        """
+        Add vf devices generated by specified pf devices.
+        """
+        # add vfs into vf info list
+        vfs = []
+        for vf in vflist:
+            if vf not in self.vfs:
+                self.vfs_info.append({'pci': vf, 'pf_pci': pf_pci})
+                vfs.append(vf)
+        used_vfs = ['unused'] * len(vfs)
+        self.unused_vfs += vfs
+        self.used_vfs += used_vfs
+        self.vfs += vfs
+
+    def del_vf_on_pf(self, pf_pci='', vflist=[]):
+        """
+        Remove vf devices generated by specified pf devices.
+        """
+        vfs = []
+        for vf in vflist:
+            for vfs_info in self.vfs_info:
+                if vfs_info['pci'] == vf:
+                    vfs.append(vf)
+
+        for vf in vfs:
+            try:
+                index = self.vfs.index(vf)
+            except:
+                continue
+            del self.vfs_info[index]
+            del self.unused_vfs[index]
+            del self.used_vfs[index]
+            del self.vfs[index]
+
+    def alloc_port(self, vm=''):
+        """
+        Allocate unused host port for vm
+        """
+        if vm == '':
+            print "Alloc host port requires a virtual machine name!!!"
+            return None
+
+        port_start = INIT_FREE_PORT + randint(1, 100)
+        port_step = randint(1, 10)
+        port = None
+        count = 20
+        while True:
+            if self.dut.check_port_occupied(port_start) is False:
+                port = port_start
+                break
+            count -= 1
+            if count < 0:
+                print 'No available port on the host!!!'
+                break
+            port_start += port_step
+
+        if vm not in self.allocated_info:
+            self.allocated_info[vm] = {}
+
+        self.allocated_info[vm]['hostport'] = port
+        return port
+
+    def free_port(self, vm):
+        if self.__vm_has_resource(vm, 'hostport'):
+            self.allocated_info[vm].pop('hostport')
+
+    def alloc_vnc_num(self, vm=''):
+        """
+        Allocate unused host VNC display number for VM.
+        """
+        if vm == '':
+            print "Alloc vnc display number requires a virtual machine name!!!"
+            return None
+
+        max_vnc_display_num = self.dut.get_maximal_vnc_num()
+        free_vnc_display_num = max_vnc_display_num + 1
+
+        if vm not in self.allocated_info:
+            self.allocated_info[vm] = {}
+
+        self.allocated_info[vm]['vnc_display_num'] = free_vnc_display_num
+
+        return free_vnc_display_num
+
+    def free_vnc_num(self, vm):
+        if self.__vm_has_resource(vm, 'vnc_display_num'):
+            self.allocated_info[vm].pop('vnc_display_num')
+
+    def free_all_resource(self, vm):
+        all_free_funcs = get_obj_funcs(self, r'free_')
+        for func in all_free_funcs:
+            if func.__name__ == 'free_all_resource':
+                continue
+            func(vm)
+        if self.__vm_has_resource(vm):
+            self.allocated_info.pop(vm)
+
+    def get_cpu_on_vm(self, vm=''):
+        """
+        Return the core list allocated to the specified VM.
+        """
+        if vm in self.allocated_info:
+            if "cores" in self.allocated_info[vm]:
+                return self.allocated_info[vm]['cores']
+
+    def get_vfs_on_vm(self, vm=''):
+        """
+        Return the VF device list allocated to the specified VM.
+        """
+        if vm in self.allocated_info:
+            if 'vfs' in self.allocated_info[vm]:
+                return self.allocated_info[vm]['vfs']
+
+    def get_pfs_on_vm(self, vm=''):
+        """
+        Return the PF device list allocated to the specified VM.
+        """
+        if vm in self.allocated_info:
+            if 'ports' in self.allocated_info[vm]:
+                return self.allocated_info[vm]['ports']
+
+
+class simple_dut(object):
+
+    def __init__(self):
+        self.ports_info = []
+        self.cores = []
+
+    def check_port_occupied(self, port):
+        return False
+
+if __name__ == "__main__":
+    dut = simple_dut()
+    dut.cores = [{'thread': '1', 'socket': '0'}, {'thread': '2', 'socket': '0'},
+                 {'thread': '3', 'socket': '0'}, {'thread': '4', 'socket': '0'},
+                 {'thread': '5', 'socket': '0'}, {'thread': '6', 'socket': '0'},
+                 {'thread': '7', 'socket': '1'}, {'thread': '8', 'socket': '1'},
+                 {'thread': '9', 'socket': '1'}, {'thread': '10', 'socket': '1'},
+                 {'thread': '11', 'socket': '1'}, {'thread': '12', 'socket': '1'}]
+
+    dut.ports_info = [{'intf': 'p786p1', 'source': 'cfg', 'mac': '90:e2:ba:69:e5:e4',
+                       'pci': '08:00.0', 'numa': 0, 'ipv6': 'fe80::92e2:baff:fe69:e5e4',
+                       'peer': 'IXIA:6.5', 'type': '8086:10fb'},
+                      {'intf': 'p786p2', 'source': 'cfg', 'mac': '90:e2:ba:69:e5:e5',
+                       'pci': '08:00.1', 'numa': 0, 'ipv6': 'fe80::92e2:baff:fe69:e5e5',
+                       'peer': 'IXIA:6.6', 'type': '8086:10fb'},
+                      {'intf': 'p787p1', 'source': 'cfg', 'mac': '90:e2:ba:69:e5:e6',
+                       'pci': '84:00.0', 'numa': 1, 'ipv6': 'fe80::92e2:baff:fe69:e5e6',
+                       'peer': 'IXIA:6.7', 'type': '8086:10fb'},
+                      {'intf': 'p787p2', 'source': 'cfg', 'mac': '90:e2:ba:69:e5:e7',
+                       'pci': '84:00.1', 'numa': 1, 'ipv6': 'fe80::92e2:baff:fe69:e5e7',
+                       'peer': 'IXIA:6.8', 'type': '8086:10fb'}]
+
+    virt_pool = VirtResource(dut)
+    print "Alloc two PF devices on socket 1 for VM-test1"
+    print virt_pool.alloc_pf(vm='test1', number=2, socket=1)
+
+    virt_pool.add_vf_on_pf(pf_pci='08:00.0', vflist=[
+                           '08:10.0', '08:10.2', '08:10.4', '08:10.6'])
+    virt_pool.add_vf_on_pf(pf_pci='08:00.1', vflist=[
+                           '08:10.1', '08:10.3', '08:10.5', '08:10.7'])
+    print "Add VF devices to resource pool"
+    print virt_pool.vfs_info
+
+    print "Alloc VF device from resource pool"
+    print virt_pool.alloc_vf_from_pf(vm='test1', pf_pci='08:00.0', number=2)
+    print virt_pool.used_vfs
+    print "Alloc VF device from resource pool"
+    print virt_pool.alloc_vf_from_pf(vm='test2', pf_pci='08:00.1', vflist=['08:10.3', '08:10.5'])
+    print virt_pool.used_vfs
+
+    print "Del VF devices from resource pool"
+    virt_pool.del_vf_on_pf(pf_pci='08:00.0', vflist=['08:10.4', '08:10.2'])
+    print virt_pool.vfs_info
+
+    virt_pool.reserve_cpu('e')
+    print "Reserve three cores from resource pool"
+    print virt_pool.unused_cores
+    print "Alloc two cores on socket1 for VM-test1"
+    print virt_pool.alloc_cpu(vm="test1", number=2, socket=1)
+    print "Alloc two cores in list for VM-test2"
+    print virt_pool.alloc_cpu(vm="test2", corelist=['4', '5'])
+    print "Alloc two cores for VM-test3"
+    print virt_pool.alloc_cpu(vm="test3", number=2)
+    print "Alloc port for VM-test1"
+    print virt_pool.alloc_port(vm='test1')
+    print "Allocation information after resources are allocated"
+    print virt_pool.allocated_info
+
+    print "Get cores on VM-test1"
+    print virt_pool.get_cpu_on_vm("test1")
+    print "Get pfs on VM-test1"
+    print virt_pool.get_pfs_on_vm("test1")
+    print "Get vfs on VM-test2"
+    print virt_pool.get_vfs_on_vm("test2")
-- 
1.9.0

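For readers following the patch: `free_all_resource` above dispatches to every `free_*` method through the `get_obj_funcs` reflection helper, skipping itself by name. A minimal standalone sketch of that dispatch pattern (written in Python 3 for clarity; the `Pool` class, its fields, and this local re-implementation of `get_obj_funcs` are illustrative assumptions, not code from the patch):

```python
import inspect
import re


def get_obj_funcs(obj, func_name_regex):
    """Yield bound methods of obj whose names match the given regex prefix."""
    for name, func in inspect.getmembers(obj, inspect.ismethod):
        if re.match(func_name_regex, name):
            yield func


class Pool(object):
    """Toy resource pool mirroring the free_* dispatch in VirtResource."""

    def __init__(self):
        # Illustrative allocation record for one VM.
        self.allocated_info = {'vm0': {'hostport': 6000, 'vnc_display_num': 1}}

    def free_port(self, vm):
        self.allocated_info.get(vm, {}).pop('hostport', None)

    def free_vnc_num(self, vm):
        self.allocated_info.get(vm, {}).pop('vnc_display_num', None)

    def free_all_resource(self, vm):
        # Call every free_* method except this one, then drop the VM record.
        for func in get_obj_funcs(self, r'free_'):
            if func.__name__ == 'free_all_resource':
                continue
            func(vm)
        self.allocated_info.pop(vm, None)


pool = Pool()
pool.free_all_resource('vm0')
print(pool.allocated_info)  # {}
```

One property of this pattern worth noting in review: any new `free_*` method added later is picked up automatically, but a `free_*` helper that takes different arguments would break the dispatch, so the naming convention carries an implicit signature contract.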

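The port-allocation loop at the top of this hunk scans candidate host ports in `port_step` increments until it finds one that is neither already allocated nor occupied on the host. A simplified Python 3 sketch of that scan (the function name, default values, and the injected `check_port_occupied` callable are assumptions for illustration, not the patch's exact interface):

```python
def alloc_host_port(used_ports, check_port_occupied,
                    port_start=6000, port_step=100, port_max=65535):
    """Return the first candidate port that is free, or None if exhausted.

    used_ports: set of ports already handed out by this pool.
    check_port_occupied: callable(port) -> bool, a stand-in for the
    DUT-side occupancy check in the patch.
    """
    port = port_start
    while port <= port_max:
        if port not in used_ports and not check_port_occupied(port):
            used_ports.add(port)  # record the allocation
            return port
        port += port_step
    return None


used = {6000}
# 6000 is allocated and 6100 is reported occupied, so 6200 is chosen.
print(alloc_host_port(used, lambda p: p == 6100))  # 6200
```

Stepping by a coarse `port_step` keeps VM serial/VNC/monitor ports for different VMs in visually distinct ranges at the cost of a sparser port space.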
Thread overview: 34+ messages
2015-05-18  5:07 [dts] [‘dts-v1’ 0/9] sjiajiax
2015-05-18  5:07 ` [dts] [‘dts-v1’ 1/9] Abstract the NIC device as the single class NetDevice sjiajiax
2015-05-18  7:46   ` Xu, HuilongX
2015-05-18  8:58     ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 2/9] Optimize ssh connection sjiajiax
2015-05-18  7:06   ` Liu, Yong
2015-05-18  7:43     ` Jiajia, SunX
2015-05-19  0:38       ` Liu, Yong
2015-05-19  7:05         ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 3/9] Add some params and functions related to the virtual test sjiajiax
2015-05-18  7:26   ` Liu, Yong
2015-05-18  8:08     ` Jiajia, SunX
2015-05-18  7:59   ` Xu, HuilongX
2015-05-18  9:08     ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 4/9] Add VM class and the virtual DUT class and the virtual resource module sjiajiax
2015-05-18  8:23   ` Xu, HuilongX [this message]
2015-05-18 13:57   ` Liu, Yong
2015-05-19  5:46     ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 5/9] Add qemu-agent-guest for QEMU VM sjiajiax
2015-05-18 14:00   ` Liu, Yong
2015-05-18  5:07 ` [dts] [‘dts-v1’ 6/9] Add a global virtual configure sjiajiax
2015-05-18  6:32   ` Liu, Yong
2015-05-18  6:48     ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 7/9] add some pmd functions for tester to code the testpmd cases sjiajiax
2015-05-18  8:28   ` Xu, HuilongX
2015-05-18  8:45     ` Liu, Yong
2015-05-18  9:05       ` Jiajia, SunX
2015-05-18  9:20     ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 8/9] Add two tar files for ACL testing sjiajiax
2015-05-18 14:02   ` Liu, Yong
2015-05-19  5:49     ` Jiajia, SunX
2015-05-18  5:07 ` [dts] [‘dts-v1’ 9/9] Add a suite to test SRIOV mirror with KVM sjiajiax
2015-05-18  6:29 ` [dts] [‘dts-v1’ 0/9] Liu, Yong
2015-05-18  6:47   ` Jiajia, SunX
