test suite reviews and discussions
* [dts] [PATCH V1] add VM vfio_pci driver options
@ 2017-05-18  9:24 xu,gang
  2017-05-18  9:24 ` [dts] [PATCH V1] add test suite pf pass through xu,gang
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: xu,gang @ 2017-05-18  9:24 UTC (permalink / raw)
  To: dts; +Cc: xu,gang

Fortville NICs do not support the pci-stub driver for starting a VM with PF PCI
pass-through, so the vfio-pci driver is used instead.
References:
http://www.dpdk.org/doc/guides/rel_notes/known_issues.html#uio-pci-generic-module-bind-failed-in-x710-xl710-xxv710
http://www.dpdk.org/doc/guides/rel_notes/known_issues.html#igb-uio-legacy-mode-can-not-be-used-in-x710-xl710-xxv710

Signed-off-by: xu,gang <gangx.xu@intel.com>
---
 framework/qemu_kvm.py | 102 ++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 73 insertions(+), 29 deletions(-)

diff --git a/framework/qemu_kvm.py b/framework/qemu_kvm.py
index 79e8417..79095bd 100644
--- a/framework/qemu_kvm.py
+++ b/framework/qemu_kvm.py
@@ -159,9 +159,11 @@ class QEMUKvm(VirtBase):
             self.host_logger.error("No emulator [ %s ] on the DUT [ %s ]" %
                                    (qemu_emulator_path, self.host_dut.get_ip_address()))
             return None
-        out = self.host_session.send_expect("[ -x %s ];echo $?" % qemu_emulator_path, '# ')
+        out = self.host_session.send_expect(
+            "[ -x %s ];echo $?" % qemu_emulator_path, '# ')
         if out != '0':
-            self.host_logger.error("Emulator [ %s ] not executable on the DUT [ %s ]" %
+            self.host_logger.error(
+                "Emulator [ %s ] not executable on the DUT [ %s ]" %
                                    (qemu_emulator_path, self.host_dut.get_ip_address()))
             return None
         self.qemu_emulator = qemu_emulator_path
@@ -177,12 +179,14 @@ class QEMUKvm(VirtBase):
         """
         Check if host has the virtual ability.
         """
-        out = self.host_session.send_expect('cat /proc/cpuinfo | grep flags', '# ')
+        out = self.host_session.send_expect(
+            'cat /proc/cpuinfo | grep flags', '# ')
         rgx = re.search(' vmx ', out)
         if rgx:
             pass
         else:
-            self.host_logger.warning("Hardware virtualization disabled on host!!!")
+            self.host_logger.warning(
+                "Hardware virtualization disabled on host!!!")
             return False
 
         out = self.host_session.send_expect('lsmod | grep kvm', '# ')
@@ -252,9 +256,11 @@ class QEMUKvm(VirtBase):
         self.__pid_file = '/tmp/.%s.pid' % self.vm_name
         index = self.find_option_index('pid_file')
         if index:
-            self.params[index] = {'pid_file': [{'name': '%s' % self.__pid_file}]}
+            self.params[index] = {
+                'pid_file': [{'name': '%s' % self.__pid_file}]}
         else:
-            self.params.append({'pid_file': [{'name': '%s' % self.__pid_file}]})
+            self.params.append(
+                {'pid_file': [{'name': '%s' % self.__pid_file}]})
 
     def add_vm_pid_file(self, **options):
         """
@@ -263,7 +269,6 @@ class QEMUKvm(VirtBase):
         if 'name' in options.keys():
             self.__add_boot_line('-pidfile %s' % options['name'])
 
-
     def set_vm_name(self, vm_name):
         """
         Set VM name.
@@ -592,6 +597,30 @@ class QEMUKvm(VirtBase):
                 self.__add_vm_virtio_user_pci(**options)
             elif options['driver'] == 'vhost-cuse':
                 self.__add_vm_virtio_cuse_pci(**options)
+            elif options['driver'] == 'vfio-pci':
+                self.__add_vm_pci_vfio(**options)
+
+    def __add_vm_pci_vfio(self, **options):
+        """
+        driver: vfio-pci
+        opt_host: host PCI address of the pass-through device, e.g. 08:00.0
+        opt_addr: guest PCI slot address for the device, e.g. 0x4
+        """
+        dev_boot_line = '-device vfio-pci'
+        separator = ','
+        if 'opt_host' in options.keys() and \
+                options['opt_host']:
+            dev_boot_line += separator + 'host=%s' % options['opt_host']
+            dev_boot_line += separator + 'id=pt_%d' % self.pt_idx
+            self.pt_idx += 1
+            self.pt_devices.append(options['opt_host'])
+        if 'opt_addr' in options.keys() and \
+                options['opt_addr']:
+            dev_boot_line += separator + 'addr=%s' % options['opt_addr']
+            self.assigned_pcis.append(options['opt_addr'])
+
+        if self.__string_has_multi_fields(dev_boot_line, separator):
+            self.__add_boot_line(dev_boot_line)
 
     def __add_vm_pci_assign(self, **options):
         """
@@ -627,21 +656,26 @@ class QEMUKvm(VirtBase):
             dev_boot_line = '-chardev socket'
             char_id = 'char%d' % self.char_idx
             if 'opt_server' in options.keys() and options['opt_server']:
-                dev_boot_line += separator + 'id=%s' % char_id + separator + 'path=%s' %options['opt_path'] + separator + '%s' % options['opt_server']
-		self.char_idx += 1
+                dev_boot_line += separator + 'id=%s' % char_id + separator + \
+                    'path=%s' % options[
+                        'opt_path'] + separator + '%s' % options['opt_server']
+                self.char_idx += 1
                 self.__add_boot_line(dev_boot_line)
             else:
-                dev_boot_line += separator + 'id=%s' % char_id + separator + 'path=%s' %options['opt_path']
+                dev_boot_line += separator + 'id=%s' % char_id + \
+                    separator + 'path=%s' % options['opt_path']
                 self.char_idx += 1
                 self.__add_boot_line(dev_boot_line)
             # netdev parameter
             netdev_id = 'netdev%d' % self.netdev_idx
             self.netdev_idx += 1
             if 'opt_queue' in options.keys() and options['opt_queue']:
-                queue_num=options['opt_queue']
-                dev_boot_line = '-netdev type=vhost-user,id=%s,chardev=%s,vhostforce,queues=%s' % (netdev_id, char_id,queue_num)
+                queue_num = options['opt_queue']
+                dev_boot_line = '-netdev type=vhost-user,id=%s,chardev=%s,vhostforce,queues=%s' % (
+                    netdev_id, char_id, queue_num)
             else:
-                dev_boot_line = '-netdev type=vhost-user,id=%s,chardev=%s,vhostforce' % (netdev_id, char_id)
+                dev_boot_line = '-netdev type=vhost-user,id=%s,chardev=%s,vhostforce' % (
+                    netdev_id, char_id)
             self.__add_boot_line(dev_boot_line)
             # device parameter
             opts = {'opt_netdev': '%s' % netdev_id}
@@ -650,7 +684,7 @@ class QEMUKvm(VirtBase):
                 opts['opt_mac'] = options['opt_mac']
             if 'opt_settings' in options.keys() and options['opt_settings']:
                 opts['opt_settings'] = options['opt_settings']
-	self.__add_vm_virtio_net_pci(**opts)
+        self.__add_vm_virtio_net_pci(**opts)
 
     def __add_vm_virtio_cuse_pci(self, **options):
         """
@@ -664,15 +698,17 @@ class QEMUKvm(VirtBase):
         else:
             cuse_id = 'vhost%d' % self.cuse_id
             self.cuse_id += 1
-        dev_boot_line += separator + 'id=%s' % cuse_id + separator + 'ifname=tap_%s' % cuse_id + separator + "vhost=on" + separator + "script=no"
+        dev_boot_line += separator + 'id=%s' % cuse_id + separator + \
+            'ifname=tap_%s' % cuse_id + separator + \
+            "vhost=on" + separator + "script=no"
         self.__add_boot_line(dev_boot_line)
         # device parameter
         opts = {'opt_netdev': '%s' % cuse_id,
                 'opt_id': '%s_net' % cuse_id}
         if 'opt_mac' in options.keys() and options['opt_mac']:
-                opts['opt_mac'] = options['opt_mac']
+            opts['opt_mac'] = options['opt_mac']
         if 'opt_settings' in options.keys() and options['opt_settings']:
-                opts['opt_settings'] = options['opt_settings']
+            opts['opt_settings'] = options['opt_settings']
 
         self.__add_vm_virtio_net_pci(**opts)
 
@@ -742,7 +778,8 @@ class QEMUKvm(VirtBase):
         path: if adding monitor to vm, need to specify unix socket patch
         """
         if 'path' in options.keys():
-            monitor_boot_line = '-monitor unix:%s,server,nowait' % options['path']
+            monitor_boot_line = '-monitor unix:%s,server,nowait' % options[
+                'path']
             self.__add_boot_line(monitor_boot_line)
             self.monitor_sock_path = options['path']
         else:
@@ -795,8 +832,10 @@ class QEMUKvm(VirtBase):
                 if 'port' in options.keys():
                     self.migrate_port = options['port']
                 else:
-                    self.migrate_port = str(self.virt_pool.alloc_port(self.vm_name))
-                migrate_boot_line = migrate_cmd % {'migrate_port': self.migrate_port}
+                    self.migrate_port = str(
+                        self.virt_pool.alloc_port(self.vm_name))
+                migrate_boot_line = migrate_cmd % {
+                    'migrate_port': self.migrate_port}
                 self.__add_boot_line(migrate_boot_line)
 
     def add_vm_serial_port(self, **options):
@@ -822,7 +861,8 @@ class QEMUKvm(VirtBase):
             if first:
                 # login into Fedora os, not sure can work on all distributions
                 self.serial_session.send_expect("", "login:")
-                self.serial_session.send_expect("%s" % self.username, "Password:")
+                self.serial_session.send_expect(
+                    "%s" % self.username, "Password:")
                 self.serial_session.send_expect("%s" % self.password, "# ")
             return self.serial_session
 
@@ -915,7 +955,8 @@ class QEMUKvm(VirtBase):
         Send migration command to host and check whether start migration
         """
         # send migration command
-        migration_port = 'tcp:%(IP)s:%(PORT)s' % {'IP': remote_ip, 'PORT': remote_port}
+        migration_port = 'tcp:%(IP)s:%(PORT)s' % {
+            'IP': remote_ip, 'PORT': remote_port}
 
         self.__monitor_session('migrate', '-d', migration_port)
         time.sleep(2)
@@ -943,7 +984,8 @@ class QEMUKvm(VirtBase):
             time.sleep(6)
             count -= 1
 
-        raise StartVMFailedException('Virtual machine can not finished in 180 seconds!!!')
+        raise StartVMFailedException(
+            'Virtual machine did not start up in 180 seconds!!!')
 
     def generate_qemu_boot_line(self):
         """
@@ -977,7 +1019,8 @@ class QEMUKvm(VirtBase):
             time.sleep(6)
             count -= 1
 
-        raise StartVMFailedException('Virtual machine control net not ready in 120 seconds!!!')
+        raise StartVMFailedException(
+            'Virtual machine control net not ready in 120 seconds!!!')
 
     def __alloc_vcpus(self):
         """
@@ -987,7 +1030,8 @@ class QEMUKvm(VirtBase):
         cpus = self.virt_pool.alloc_cpu(vm=self.vm_name, corelist=req_cpus)
 
         if len(req_cpus) != len(cpus):
-            self.host_logger.warning("VCPUs not enough, required [ %s ], just [ %s ]" %
+            self.host_logger.warning(
+                "Not enough vCPUs, required [ %s ], allocated [ %s ]" %
                                      (req_cpus, cpus))
             raise Exception("No enough required vcpus!!!")
 
@@ -1146,7 +1190,8 @@ class QEMUKvm(VirtBase):
                 (self.host_dut.NAME, self.vm_name))
             return None
 
-        self.host_session.send_expect('nc -U %s' % self.monitor_sock_path, '(qemu)')
+        self.host_session.send_expect(
+            'nc -U %s' % self.monitor_sock_path, '(qemu)')
 
         cmd = command
         for arg in args:
@@ -1183,8 +1228,6 @@ class QEMUKvm(VirtBase):
         except:
             self.host_logger.info("Failed to capture pid!!!")
 
-
-
     def __strip_guest_pci(self):
         """
         Strip all pci-passthrough device information, based on qemu monitor
@@ -1204,7 +1247,8 @@ class QEMUKvm(VirtBase):
             m = re.match(pci_reg, line)
             n = re.match(id_reg, line)
             if m:
-                pci = "%02d:%02d.%d" % (int(m.group(2)), int(m.group(4)), int(m.group(5)))
+                pci = "%02d:%02d.%d" % (
+                    int(m.group(2)), int(m.group(4)), int(m.group(5)))
             if n:
                 dev_id = n.group(1)
                 if dev_id != '':
-- 
1.9.3
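The option string built by the patch's `__add_vm_pci_vfio` can be sketched as a
standalone function (a minimal re-creation for illustration, not the DTS class
itself; the field names mirror the patch):

```python
def build_vfio_pci_option(pt_idx, opt_host=None, opt_addr=None):
    # Mirrors the patch's string building: start from '-device vfio-pci'
    # and append comma-separated fields.
    fields = ['-device vfio-pci']
    if opt_host:
        fields.append('host=%s' % opt_host)   # host PCI address to pass through
        fields.append('id=pt_%d' % pt_idx)    # unique pass-through device id
    if opt_addr:
        fields.append('addr=%s' % opt_addr)   # guest PCI slot for the device
    # Emit a boot line only when at least one option was added
    # (the patch checks this with __string_has_multi_fields).
    return ','.join(fields) if len(fields) > 1 else None
```

For example, `build_vfio_pci_option(0, opt_host='85:00.0')` returns
`'-device vfio-pci,host=85:00.0,id=pt_0'`, matching the `-device` lines shown
in the accompanying test plan.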


* [dts] [PATCH V1] add test suite pf pass through
  2017-05-18  9:24 [dts] [PATCH V1] add VM vfio_pci driver options xu,gang
@ 2017-05-18  9:24 ` xu,gang
  2017-05-18  9:24 ` [dts] [PATCH V1] add test plan " xu,gang
  2017-05-22 10:07 ` [dts] [PATCH V1] add VM vfio_pci driver options Liu, Yong
  2 siblings, 0 replies; 5+ messages in thread
From: xu,gang @ 2017-05-18  9:24 UTC (permalink / raw)
  To: dts; +Cc: xu,gang

Signed-off-by: xu,gang <gangx.xu@intel.com>
---
 tests/TestSuite_pf_pass_through.py | 121 +++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)
 create mode 100644 tests/TestSuite_pf_pass_through.py

diff --git a/tests/TestSuite_pf_pass_through.py b/tests/TestSuite_pf_pass_through.py
new file mode 100644
index 0000000..8885744
--- /dev/null
+++ b/tests/TestSuite_pf_pass_through.py
@@ -0,0 +1,121 @@
+# <COPYRIGHT_TAG>
+
+import re
+import time
+
+from qemu_kvm import QEMUKvm
+from test_case import TestCase
+from pmd_output import PmdOutput
+
+VM_CORES_MASK = 'all'
+
+
+class TestPfPassThrough(TestCase):
+
+    def set_up_all(self):
+        """
+        Run once at the start of the test suite.
+        """
+        self.dut_ports = self.dut.get_ports(self.nic)
+        self.verify(len(self.dut_ports) > 1, "Insufficient ports")
+        self.vm0 = None
+        self.env_done = False
+        self.dut.send_expect("modprobe vfio", "#", 5)
+        self.dut.send_expect("modprobe vfio-pci", "#", 5)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        if not self.env_done:
+            self.setup_vm_env()
+
+    def setup_vm_env(self, driver='default'):
+
+        # Start vm with the two PFs on the DUT
+        self.used_dut_port_0 = self.dut_ports[0]
+        port = self.dut.ports_info[self.used_dut_port_0]['port']
+        port.bind_driver('vfio-pci')
+
+        self.used_dut_port_1 = self.dut_ports[1]
+        port = self.dut.ports_info[self.used_dut_port_1]['port']
+        port.bind_driver('vfio-pci')
+
+        try:
+            time.sleep(1)
+            self.pci0 = self.dut.ports_info[self.used_dut_port_0]['pci']
+            self.pci1 = self.dut.ports_info[self.used_dut_port_1]['pci']
+            pf0_prop = {'opt_host': self.pci0}
+            pf1_prop = {'opt_host': self.pci1}
+
+            # Set up VM0 ENV
+            self.vm0 = QEMUKvm(self.dut, 'vm0', 'pf_pass_through')
+            self.vm0.set_vm_device(driver='vfio-pci', **pf0_prop)
+            self.vm0.set_vm_device(driver='vfio-pci', **pf1_prop)
+            self.vm_dut_0 = self.vm0.start()
+            if self.vm_dut_0 is None:
+                raise Exception("Set up VM0 ENV failed!")
+
+        except Exception as e:
+            self.destroy_vm_env()
+            raise Exception(e)
+
+        self.env_done = True
+
+    def destroy_vm_env(self):
+        if getattr(self, 'vm0', None):
+            # destroy testpmd in vm0
+            if getattr(self, 'vm0_testpmd', None):
+                self.vm0_testpmd.execute_cmd('stop')
+                self.vm0_testpmd.execute_cmd('quit', '# ')
+                self.vm0_testpmd = None
+            self.vm0_dut_ports = None
+            # destroy vm0
+            self.vm0.stop()
+            self.vm0 = None
+
+        self.dut.virt_exit()
+        time.sleep(3)
+
+        for port_id in self.dut_ports:
+            port = self.dut.ports_info[port_id]['port']
+            port.bind_driver()
+
+        self.env_done = False
+
+    def test_pf_pass_through(self):
+
+        self.verify(
+            self.kdriver in ["ixgbe", "i40e"], "%s driver does not support PF pass-through" % self.kdriver)
+        # Start testpmd in VM
+        self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+        self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+        self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
+        self.vm0_testpmd.execute_cmd('set fwd mac')
+        self.vm0_testpmd.execute_cmd('start')
+
+        tgen_ports = []
+        tx_port = self.tester.get_local_port(self.dut_ports[0])
+        rx_port = self.tester.get_local_port(self.dut_ports[1])
+        tgen_ports.append((tx_port, rx_port))
+
+        dst_mac = self.vm0_testpmd.get_port_mac(0)
+        src_mac = self.tester.get_mac(tx_port)
+        pkt_param = [("ether", {'dst': dst_mac, 'src': src_mac})]
+
+        result = self.tester.check_random_pkts(
+            tgen_ports, allow_miss=False, params=pkt_param)
+        self.verify(result is not False, "PF0 failed to forward packets to PF1")
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        pass
+
+    def tear_down_all(self):
+        """
+        Run once after the test suite.
+        """
+        self.destroy_vm_env()
-- 
1.9.3
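The pass/fail criterion in `test_pf_pass_through` relies on testpmd's
`set fwd mac` mode inside the guest. A toy two-port model of that forwarding
behaviour (a simplification for illustration only: destination-MAC rewriting
and queue handling are omitted):

```python
def mac_fwd(pkt, port_macs):
    """Toy two-port model of testpmd 'set fwd mac': a packet received on one
    port is transmitted on the peer port, with its source MAC rewritten to
    the transmitting port's MAC."""
    tx_port = 1 - pkt['rx_port']       # two-port pair: 0 <-> 1
    out = dict(pkt)
    out['src'] = port_macs[tx_port]    # src MAC becomes the TX port's MAC
    out['tx_port'] = tx_port
    del out['rx_port']
    return out
```

This is why the suite sends packets with `dst` set to the guest port-0 MAC and
then checks them on the tester port attached to PF1: a packet entering PF0
should reappear on PF1 with a rewritten source MAC.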


* [dts] [PATCH V1] add test plan pass through
  2017-05-18  9:24 [dts] [PATCH V1] add VM vfio_pci driver options xu,gang
  2017-05-18  9:24 ` [dts] [PATCH V1] add test suite pf pass through xu,gang
@ 2017-05-18  9:24 ` xu,gang
  2017-06-09 12:44   ` Liu, Yong
  2017-05-22 10:07 ` [dts] [PATCH V1] add VM vfio_pci driver options Liu, Yong
  2 siblings, 1 reply; 5+ messages in thread
From: xu,gang @ 2017-05-18  9:24 UTC (permalink / raw)
  To: dts; +Cc: xu,gang

Signed-off-by: xu,gang <gangx.xu@intel.com>
---
 test_plans/pf_pass_through_test_plan.rst | 93 ++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 test_plans/pf_pass_through_test_plan.rst

diff --git a/test_plans/pf_pass_through_test_plan.rst b/test_plans/pf_pass_through_test_plan.rst
new file mode 100644
index 0000000..e77d627
--- /dev/null
+++ b/test_plans/pf_pass_through_test_plan.rst
@@ -0,0 +1,93 @@
+.. Copyright (c) 2017, Intel Corporation
+      All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+   
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+   
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+   
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+   
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+This test suite validates PF PCI pass-through to a VM.
+Basic RX/TX is tested.
+
+Prerequisites
+=============
+The i40e and ixgbe drivers are supported.
+
+Fortville NICs do not support the pci-stub driver for starting a VM with PF
+PCI pass-through, so the vfio-pci driver is used instead.
+References:
+http://www.dpdk.org/doc/guides/rel_notes/known_issues.html#uio-pci-generic-module-bind-failed-in-x710-xl710-xxv710
+http://www.dpdk.org/doc/guides/rel_notes/known_issues.html#igb-uio-legacy-mode-can-not-be-used-in-x710-xl710-xxv710
+
+Set up basic scenario:
+
+1. Get the PCI device IDs of the DUT ports, for example::
+
+   ./usertools/dpdk-devbind.py --status
+   0000:85:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens260f0 drv=ixgbe unused=igb_uio
+   0000:85:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens260f1 drv=ixgbe unused=igb_uio
+
+2. Detach the PFs from the host and bind them to the vfio-pci driver::
+
+   modprobe vfio
+   modprobe vfio-pci
+   ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:85:00.0 0000:85:00.1
+
+   ./usertools/dpdk-devbind.py --status
+   0000:85:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if= drv=vfio-pci unused=ixgbe,igb_uio
+   0000:85:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if= drv=vfio-pci unused=ixgbe,igb_uio
+
+Both 85:00.0 and 85:00.1 are now bound to the vfio-pci driver.
+
+Alternatively, bind through sysfs (the ``new_id`` write lets vfio-pci claim
+the device ID; 8086 10fb is the 82599ES used in this example)::
+
+   /sbin/modprobe vfio-pci
+   echo "8086 10fb" > /sys/bus/pci/drivers/vfio-pci/new_id
+   echo 0000:85:00.0 > /sys/bus/pci/devices/0000:85:00.0/driver/unbind
+   echo 0000:85:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
+
+   echo 0000:85:00.1 > /sys/bus/pci/devices/0000:85:00.1/driver/unbind
+   echo 0000:85:00.1 > /sys/bus/pci/drivers/vfio-pci/bind
+
+3. Pass PFs 85:00.0 & 85:00.1 through to vm0 and start vm0::
+
+    /usr/bin/qemu-system-x86_64  -name vm0 -enable-kvm \
+        -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc25-1.img -vnc :1 \
+        -device vfio-pci,host=85:00.0,id=pt_0 \
+        -device vfio-pci,host=85:00.1,id=pt_1
+
+4. Log in to vm0 and get the pass-through devices' PCI IDs in the guest;
+   assume they are 00:04.0 & 00:05.0, then bind them to the igb_uio driver::
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio 00:04.0 00:05.0
+
+
+5. Start testpmd with MAC forward mode::
+
+    testpmd -c 0x0f -n 4 -w 00:04.0 -w 00:05.0 -- -i --portmask=0x3 --txqflags=0
+    testpmd> set fwd mac
+    testpmd> start
+
+Test Case: PF PCI pass-through
+==============================
+Send 2000 random packets from tester port0 to PF0 and verify that the packets
+are correctly forwarded to PF1 and received by tester port1.
+
-- 
1.9.3
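The check in step 2 of the plan — that both PFs end up with ``drv=vfio-pci`` —
can be automated by parsing the devbind status output. A sketch, assuming the
output format matches the example shown in the plan:

```python
import re

def pci_ids_bound_to(devbind_output, driver):
    """Return the PCI addresses whose 'drv=' field equals `driver` in
    dpdk-devbind status output (format assumed from the plan's example)."""
    ids = []
    for line in devbind_output.splitlines():
        # e.g. "0000:85:00.0 '82599ES ...' if= drv=vfio-pci unused=ixgbe,igb_uio"
        m = re.match(r"(\d{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d).*\bdrv=(\S+)",
                     line.strip())
        if m and m.group(2) == driver:
            ids.append(m.group(1))
    return ids
```

A test suite could call this on the tool's output and verify that both
expected PF addresses appear before proceeding to start the VM.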


* Re: [dts] [PATCH V1] add VM vfio_pci driver options
  2017-05-18  9:24 [dts] [PATCH V1] add VM vfio_pci driver options xu,gang
  2017-05-18  9:24 ` [dts] [PATCH V1] add test suite pf pass through xu,gang
  2017-05-18  9:24 ` [dts] [PATCH V1] add test plan " xu,gang
@ 2017-05-22 10:07 ` Liu, Yong
  2 siblings, 0 replies; 5+ messages in thread
From: Liu, Yong @ 2017-05-22 10:07 UTC (permalink / raw)
  To: xu,gang, dts

Gang, I am a little bit confused by your commit log. I remember that
FVL devices can be passed through into a VM with the pci-stub driver.
Could you please give more details?


Thanks,
Marvin

On 05/18/2017 05:24 PM, xu,gang wrote:
> Fortville pci does not support pci-stub driver to start VM with PF pci pass-through,
> so used vfio driver.


* Re: [dts] [PATCH V1] add test plan pass through
  2017-05-18  9:24 ` [dts] [PATCH V1] add test plan " xu,gang
@ 2017-06-09 12:44   ` Liu, Yong
  0 siblings, 0 replies; 5+ messages in thread
From: Liu, Yong @ 2017-06-09 12:44 UTC (permalink / raw)
  To: xu,gang, dts

Gang,
I am still confused by your comment; the known-issues list only mentions
uio_pci_generic and the igb_uio legacy mode. It has no relation to pci-stub
pass-through. As far as I know, Fortville can work with both pci-stub and
vfio-pci pass-through.

Please check again and add pci-stub pass-through to the test plan.

Thanks,
Marvin

On 05/18/2017 05:24 PM, xu,gang wrote:
> +
> +Prerequisites
> +=============
> +Support i40e and ixgbe driver.
> +
> +Fortville pci does not support pci-stub driver to start VM with PF pci pass-through,
> +so used vfio driver.
> +Refer:
> +http://www.dpdk.org/doc/guides/rel_notes/known_issues.html#uio-pci-generic-module-bind-failed-in-x710-xl710-xxv710
> +http://www.dpdk.org/doc/guides/rel_notes/known_issues.html#igb-uio-legacy-mode-can-not-be-used-in-x710-xl710-xxv710
> +

