test suite reviews and discussions
* Re: [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync with test plan
  2021-01-07 13:36 [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync with test plan Ling Wei
@ 2021-01-07  5:45 ` Ling, WeiX
  2021-01-07  5:48 ` Wang, Yinan
  1 sibling, 0 replies; 4+ messages in thread
From: Ling, WeiX @ 2021-01-07  5:45 UTC (permalink / raw)
  To: Ling, WeiX, dts

[-- Attachment #1: Type: text/plain, Size: 336 bytes --]

Tested-by: Wei Ling <weix.ling@intel.com>

Regards,
Ling Wei

> -----Original Message-----
> From: Ling Wei <weix.ling@intel.com>
> Sent: Thursday, January 7, 2021 09:36 PM
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts][PATCH V2] tests/vswitch_sample_cbdma:add test suite sync
> with test plan


[-- Attachment #2: TestVswitchSampleCBDMA_func.log --]
[-- Type: application/octet-stream, Size: 756387 bytes --]

[-- Attachment #3: TestVswitchSampleCBDMA_perf.log --]
[-- Type: application/octet-stream, Size: 195098 bytes --]


* Re: [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync with test plan
  2021-01-07 13:36 [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync with test plan Ling Wei
  2021-01-07  5:45 ` Ling, WeiX
@ 2021-01-07  5:48 ` Wang, Yinan
  2021-01-13  6:23   ` Tu, Lijuan
  1 sibling, 1 reply; 4+ messages in thread
From: Wang, Yinan @ 2021-01-07  5:48 UTC (permalink / raw)
  To: Ling, WeiX, dts; +Cc: Ling, WeiX

Acked-by: Wang, Yinan <yinan.wang@intel.com>

> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Ling Wei
> Sent: January 7, 2021 21:36
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync
> with test plan
> 
> v1:add test suite sync with test plan.
> 
> v2:modify Copyright(c) <2019> to Copyright(c) <2021>.
> 
> Signed-off-by: Ling Wei <weix.ling@intel.com>



* [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync with test plan
@ 2021-01-07 13:36 Ling Wei
  2021-01-07  5:45 ` Ling, WeiX
  2021-01-07  5:48 ` Wang, Yinan
  0 siblings, 2 replies; 4+ messages in thread
From: Ling Wei @ 2021-01-07 13:36 UTC (permalink / raw)
  To: dts; +Cc: Ling Wei

v1:add test suite sync with test plan.

v2:modify Copyright(c) <2019> to Copyright(c) <2021>.

Signed-off-by: Ling Wei <weix.ling@intel.com>
---
 tests/TestSuite_vswitch_sample_cbdma.py | 625 ++++++++++++++++++++++++
 1 file changed, 625 insertions(+)
 create mode 100644 tests/TestSuite_vswitch_sample_cbdma.py

diff --git a/tests/TestSuite_vswitch_sample_cbdma.py b/tests/TestSuite_vswitch_sample_cbdma.py
new file mode 100644
index 00000000..72caa42f
--- /dev/null
+++ b/tests/TestSuite_vswitch_sample_cbdma.py
@@ -0,0 +1,625 @@
+# BSD LICENSE\r
+#\r
+# Copyright(c) <2021> Intel Corporation. All rights reserved.\r
+# All rights reserved.\r
+#\r
+# Redistribution and use in source and binary forms, with or without\r
+# modification, are permitted provided that the following conditions\r
+# are met:\r
+#\r
+#   * Redistributions of source code must retain the above copyright\r
+#     notice, this list of conditions and the following disclaimer.\r
+#   * Redistributions in binary form must reproduce the above copyright\r
+#     notice, this list of conditions and the following disclaimer in\r
+#     the documentation and/or other materials provided with the\r
+#     distribution.\r
+#   * Neither the name of Intel Corporation nor the names of its\r
+#     contributors may be used to endorse or promote products derived\r
+#     from this software without specific prior written permission.\r
+#\r
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\r
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\r
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\r
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\r
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\r
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\r
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\r
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\r
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\r
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\r
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\r
+\r
+"""\r
+DPDK Test suite.\r
+"""\r
+\r
+import utils\r
+import re\r
+import time\r
+import os\r
+from test_case import TestCase\r
+from packet import Packet\r
+from pktgen import PacketGeneratorHelper\r
+from pmd_output import PmdOutput\r
+from virt_common import VM\r
+\r
+\r
+class TestVswitchSampleCBDMA(TestCase):\r
+\r
+    def set_up_all(self):\r
+        """\r
+        Run at the start of each test suite.\r
+        """\r
+        self.set_max_queues(512)\r
+        self.dut_ports = self.dut.get_ports()\r
+        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")\r
+        self.tester_tx_port_num = 1\r
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])\r
+        self.socket = self.dut.get_numa_id(self.dut_ports[0])\r
+        self.cores = self.dut.get_core_list("all", socket=self.socket)\r
+        self.vhost_core_list = self.cores[0:2]\r
+        self.vuser0_core_list = self.cores[2:4]\r
+        self.vuser1_core_list = self.cores[4:6]\r
+        self.vhost_core_mask = utils.create_mask(self.vhost_core_list)\r
+        self.mem_channels = self.dut.get_memory_channels()\r
+        # get cbdma device\r
+        self.cbdma_dev_infos = []\r
+        self.dmas_info = None\r
+        self.device_str = None\r
+        self.out_path = '/tmp'\r
+        out = self.tester.send_expect('ls -d %s' % self.out_path, '# ')\r
+        if 'No such file or directory' in out:\r
+            self.tester.send_expect('mkdir -p %s' % self.out_path, '# ')\r
+        self.base_dir = self.dut.base_dir.replace('~', '/root')\r
+        txport = self.tester.get_local_port(self.dut_ports[0])\r
+        self.txItf = self.tester.get_interface(txport)\r
+        self.virtio_dst_mac0 = '00:11:22:33:44:10'\r
+        self.virtio_dst_mac1 = '00:11:22:33:44:11'\r
+        self.vm_dst_mac0 = '52:54:00:00:00:01'\r
+        self.vm_dst_mac1 = '52:54:00:00:00:02'\r
+        self.vm_num = 2\r
+        self.vm_dut = []\r
+        self.vm = []\r
+        self.app_testpmd_path = self.dut.apps_name['test-pmd']\r
+        # create an instance to set stream field setting\r
+        self.pktgen_helper = PacketGeneratorHelper()\r
+\r
+\r
+    def set_up(self):\r
+        """\r
+        Run before each test case.\r
+        """\r
+        self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")\r
+        self.dut.send_expect("killall -I qemu-system-x86_64", '#', 20)\r
+        self.vhost_user = self.dut.new_session(suite="vhost-user")\r
+        self.virtio_user0 = self.dut.new_session(suite="virtio-user0")\r
+        self.virtio_user1 = self.dut.new_session(suite="virtio-user1")\r
+        self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0)\r
+        self.virtio_user1_pmd = PmdOutput(self.dut, self.virtio_user1)\r
+\r
+    def set_async_threshold(self, async_threshold=256):\r
+        self.logger.info("Configure async_threshold to {}".format(async_threshold))\r
+        self.dut.send_expect("sed -i -e 's/f.async_threshold = .*$/f.async_threshold = {};/' "\r
+                             "./examples/vhost/main.c".format(async_threshold), "#", 20)\r
+\r
+    def set_max_queues(self, max_queues=512):\r
+        self.logger.info("Configure MAX_QUEUES to {}".format(max_queues))\r
+        self.dut.send_expect("sed -i -e 's/#define MAX_QUEUES .*$/#define MAX_QUEUES {}/' "\r
+                             "./examples/vhost/main.c".format(max_queues), "#", 20)\r
+\r
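The two helpers above patch `examples/vhost/main.c` in place with `sed` before rebuilding the app. A minimal sketch of the same substitution in Python, with `patch_max_queues` as an illustrative name and the source line content assumed:

```python
import re

def patch_max_queues(source_text, max_queues):
    # Equivalent of the sed call above:
    #   sed -i -e 's/#define MAX_QUEUES .*$/#define MAX_QUEUES <n>/' main.c
    return re.sub(r'#define MAX_QUEUES .*$',
                  '#define MAX_QUEUES {}'.format(max_queues),
                  source_text, flags=re.M)

print(patch_max_queues('#define MAX_QUEUES 128\n', 512))
```

Because the suite edits the source rather than passing a runtime option, `build_vhost_app()` must be called again after every change for it to take effect.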
+    def build_vhost_app(self):\r
+        out = self.dut.build_dpdk_apps('./examples/vhost')\r
+        self.verify('Error' not in out, 'vhost compilation error')\r
+\r
+    @property\r
+    def check_2M_env(self):\r
+        out = self.dut.send_expect("cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# ")\r
+        return out == '2048'\r
+\r
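The `check_2M_env` property above greps `/proc/meminfo` on the DUT; the same check can be sketched locally as (sample text illustrative, `Hugepagesize` is reported in kB):

```python
import re

def is_2m_hugepage(meminfo_text):
    # 2 MB hugepages show up in /proc/meminfo as 'Hugepagesize:    2048 kB'
    m = re.search(r'^Hugepagesize:\s+(\d+)\s+kB', meminfo_text, re.M)
    return m is not None and m.group(1) == '2048'

sample = 'MemTotal:       16384 kB\nHugepagesize:       2048 kB\n'
print(is_2m_hugepage(sample))  # -> True
```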
+    def start_vhost_app(self, with_cbdma=True, cbdma_num=1, socket_num=1, client_mode=False):\r
+        """\r
+        launch the vhost app on vhost side\r
+        """\r
+        self.app_path = self.dut.apps_name['vhost']\r
+        socket_file_param = ''\r
+        for item in range(socket_num):\r
+            socket_file_param += '--socket-file ./vhost-net{} '.format(item)\r
+        allow_pci = [self.dut.ports_info[0]['pci']]\r
+        for item in range(cbdma_num):\r
+            allow_pci.append(self.cbdma_dev_infos[item])\r
+        allow_option = ''\r
+        for item in allow_pci:\r
+            allow_option += ' -a {}'.format(item)\r
+        if with_cbdma:\r
+            if client_mode:\r
+                params = (" -c {} -n {} {} -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 "\r
+                          + socket_file_param + "--dmas [{}] --client").format(self.vhost_core_mask, self.mem_channels,\r
+                                                                               allow_option, self.dmas_info)\r
+            else:\r
+                params = (" -c {} -n {} {} -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 "\r
+                          + socket_file_param + "--dmas [{}]").format(self.vhost_core_mask, self.mem_channels,\r
+                                                                      allow_option, self.dmas_info)\r
+        else:\r
+            params = (" -c {} -n {} {} -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 " + socket_file_param).format(\r
+                self.vhost_core_mask, self.mem_channels, allow_option)\r
+        self.command_line = self.app_path + params\r
+        self.vhost_user.send_command(self.command_line)\r
+        # After starting the dpdk-vhost app, wait 3 seconds for it to come up\r
+        time.sleep(3)\r
+\r
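Assuming one NIC port and one CBDMA channel, the parameter assembly in `start_vhost_app()` above produces a command line of roughly the shape below; `build_vhost_cmdline` and the PCI addresses are illustrative, not part of the suite:

```python
def build_vhost_cmdline(core_mask, mem_channels, allow_pci, socket_num,
                        dmas_info, client_mode=False):
    # Mirrors the string assembly in start_vhost_app()
    socket_file_param = ''.join('--socket-file ./vhost-net{} '.format(i)
                                for i in range(socket_num))
    allow_option = ''.join(' -a {}'.format(pci) for pci in allow_pci)
    params = (' -c {} -n {} {} -- -p 0x1 --mergeable 1 --vm2vm 1 '
              '--dma-type ioat --stats 1 {}--dmas [{}]').format(
                  core_mask, mem_channels, allow_option, socket_file_param,
                  dmas_info)
    if client_mode:
        params += ' --client'
    return 'dpdk-vhost' + params

cmd = build_vhost_cmdline('0x3', 4, ['0000:18:00.0', '0000:00:04.0'], 1,
                          'txd0@00:04.0')
print(cmd)
```

The `-a` allow list must contain both the NIC and every CBDMA device the `--dmas` list refers to, which is why `allow_pci` is built from both above.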
+    def start_virtio_testpmd(self, pmd_session, dev_mac, dev_id, cores, prefix,  enable_queues=1, server_mode=False,\r
+                             nb_cores=1, used_queues=1):\r
+        """\r
+        launch the testpmd as virtio with vhost_net0\r
+        """\r
+        if server_mode:\r
+            eal_params = " --vdev=net_virtio_user0,mac={},path=./vhost-net{},queues={},server=1".format(dev_mac, dev_id,\r
+                                                                                                        enable_queues)\r
+        else:\r
+            eal_params = " --vdev=net_virtio_user0,mac={},path=./vhost-net{},queues={}".format(dev_mac, dev_id,\r
+                                                                                               enable_queues)\r
+        if self.check_2M_env:\r
+            eal_params += " --single-file-segments"\r
+        params = "--nb-cores={} --rxq={} --txq={} --txd=1024 --rxd=1024".format(nb_cores, used_queues, used_queues)\r
+        pmd_session.start_testpmd(cores=cores, param=params, eal_param=eal_params, no_pci=True, ports=[], prefix=prefix,\r
+                                  fixed_prefix=True)\r
+\r
+    def start_vms(self, mode=0, mergeable=True):\r
+        """\r
+        start two VM, each VM has one virtio device\r
+        """\r
+        if mode == 0:\r
+            setting_args = "disable-modern=true"\r
+        elif mode == 1:\r
+            setting_args = "disable-modern=false"\r
+        elif mode == 2:\r
+            setting_args = "disable-modern=false,packed=on"\r
+        if mergeable is True:\r
+            setting_args += "," + "mrg_rxbuf=on"\r
+        else:\r
+            setting_args += "," + "mrg_rxbuf=off"\r
+        setting_args += ",csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on"\r
+\r
+        for i in range(self.vm_num):\r
+            vm_dut = None\r
+            vm_info = VM(self.dut, 'vm%d' % i, 'vhost_sample')\r
+            vm_params = {}\r
+            vm_params['driver'] = 'vhost-user'\r
+            vm_params['opt_path'] = self.base_dir + '/vhost-net%d' % i\r
+            vm_params['opt_mac'] = "52:54:00:00:00:0%d" % (i+1)\r
+            vm_params['opt_settings'] = setting_args\r
+            vm_info.set_vm_device(**vm_params)\r
+            time.sleep(3)\r
+            try:\r
+                vm_dut = vm_info.start()\r
+                if vm_dut is None:\r
+                    raise Exception("Set up VM ENV failed")\r
+            except Exception as e:\r
+                print((utils.RED("Failure for %s" % str(e))))\r
+                raise e\r
+            self.vm_dut.append(vm_dut)\r
+            self.vm.append(vm_info)\r
+\r
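The mode-to-QEMU-argument mapping in `start_vms()` above can be sketched as a small helper (`build_virtio_settings` is an illustrative name):

```python
def build_virtio_settings(mode, mergeable):
    # mode 0 -> virtio 0.95, mode 1 -> virtio 1.0, mode 2 -> packed ring,
    # matching the branches in start_vms()
    args = {0: 'disable-modern=true',
            1: 'disable-modern=false',
            2: 'disable-modern=false,packed=on'}[mode]
    args += ',mrg_rxbuf=on' if mergeable else ',mrg_rxbuf=off'
    args += ',csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on'
    return args

print(build_virtio_settings(2, True))
```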
+    def start_vm_testpmd(self, pmd_session):\r
+        """\r
+        launch the testpmd in vm\r
+        """\r
+        self.vm_cores = [1,2]\r
+        param = "--rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024"\r
+        pmd_session.start_testpmd(cores=self.vm_cores, param=param)\r
+\r
+    def repeat_bind_driver(self, dut, repeat_times=50):\r
+        i = 0\r
+        while i < repeat_times:\r
+            dut.unbind_interfaces_linux()\r
+            dut.bind_interfaces_linux(driver='virtio-pci')\r
+            dut.bind_interfaces_linux(driver='vfio-pci')\r
+            i += 1\r
+\r
+    def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num):\r
+        """\r
+        get all cbdma ports\r
+        """\r
+        str_info = 'Misc (rawdev) devices using kernel driver'\r
+        out = self.dut.send_expect('./usertools/dpdk-devbind.py --status-dev misc', '# ', 30)\r
+        device_info = out.split('\n')\r
+        for device in device_info:\r
+            pci_info = re.search('\s*(0000:\d+:\d+\.\d+)', device)\r
+            if pci_info is not None:\r
+                dev_info = pci_info.group(1)\r
+                # check the numa id of the ioat device; only add devices on the same socket as the nic\r
+                bus = int(dev_info[5:7], base=16)\r
+                if bus >= 128:\r
+                    cur_socket = 1\r
+                else:\r
+                    cur_socket = 0\r
+                if self.ports_socket == cur_socket:\r
+                    self.cbdma_dev_infos.append(pci_info.group(1))\r
+        self.verify(len(self.cbdma_dev_infos) >= cbdma_num, 'There are not enough cbdma devices to run this suite')\r
+        used_cbdma = self.cbdma_dev_infos[0:cbdma_num]\r
+        dmas_info = ''\r
+        for dmas in used_cbdma:\r
+            number = used_cbdma.index(dmas)\r
+            dmas = 'txd{}@{},'.format(number, dmas.replace('0000:', ''))\r
+            dmas_info += dmas\r
+        self.dmas_info = dmas_info[:-1]\r
+        self.device_str = ' '.join(used_cbdma)\r
+        self.dut.setup_modules(self.target, "igb_uio","None")\r
+        self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=%s %s' % ("igb_uio", self.device_str), '# ', 60)\r
+\r
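The `--dmas` string built above pairs each virtqueue's TX direction with one CBDMA channel, in the form `txd<N>@<bus:dev.fn>` with the PCI domain prefix stripped. A sketch of the same assembly (the PCI addresses are illustrative):

```python
def build_dmas_info(used_cbdma):
    # One 'txd<queue>@<bus:dev.fn>' entry per CBDMA channel, comma separated
    return ','.join('txd{}@{}'.format(i, pci.replace('0000:', ''))
                    for i, pci in enumerate(used_cbdma))

print(build_dmas_info(['0000:00:04.0', '0000:00:04.1']))
# -> txd0@00:04.0,txd1@00:04.1
```

Using `enumerate` here gives the same result as the `list.index()` lookup in the suite, since the bound PCI addresses are unique.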
+    def send_vlan_packet(self, dts_mac, pkt_size=64, pkt_count=1):\r
+        """\r
+        Send a vlan packet with vlan id 1000\r
+        """\r
+        pkt = Packet(pkt_type='VLAN_UDP', pkt_len=pkt_size)\r
+        pkt.config_layer('ether', {'dst': dts_mac})\r
+        pkt.config_layer('vlan', {'vlan': 1000})\r
+        pkt.send_pkt(self.tester, tx_port=self.txItf, count=pkt_count)\r
+\r
+    def verify_receive_packet(self, pmd_session, expected_pkt_count):\r
+        out = pmd_session.execute_cmd("show port stats all")\r
+        rx_num = re.compile('RX-packets: (\d+)').findall(out)\r
+        self.verify((int(rx_num[0]) >= int(expected_pkt_count)), "Can't receive enough packets from tester")\r
+\r
+    def bind_cbdma_device_to_kernel(self):\r
+        if self.device_str is not None:\r
+            self.dut.send_expect('modprobe ioatdma', '# ')\r
+            self.dut.send_expect('./usertools/dpdk-devbind.py -u %s' % self.device_str, '# ', 30)\r
+            self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=ioatdma  %s' % self.device_str, '# ', 60)\r
+\r
+    def config_stream(self, frame_size, port_num, dst_mac_list):\r
+        tgen_input = []\r
+        rx_port = self.tester.get_local_port(self.dut_ports[0])\r
+        tx_port = self.tester.get_local_port(self.dut_ports[0])\r
+        for item in range(port_num):\r
+            for dst_mac in dst_mac_list:\r
+                pkt = Packet(pkt_type='VLAN_UDP', pkt_len=frame_size)\r
+                pkt.config_layer('ether', {'dst': dst_mac})\r
+                pkt.config_layer('vlan', {'vlan': 1000})\r
+                pcap = os.path.join(self.out_path, "vswitch_sample_cbdma_%s_%s_%s.pcap" % (item, dst_mac, frame_size))\r
+                pkt.save_pcapfile(None, pcap)\r
+                tgen_input.append((rx_port, tx_port, pcap))\r
+        return tgen_input\r
+\r
+    def perf_test(self, frame_size, dst_mac_list):\r
+        # Create test results table\r
+        table_header = ['Frame Size(Byte)', 'Throughput(Mpps)']\r
+        self.result_table_create(table_header)\r
+        # Begin test perf\r
+        test_result = {}\r
+        for frame_size in frame_size:\r
+            self.logger.info("Test running at parameters: " + "framesize: {}".format(frame_size))\r
+            tgenInput = self.config_stream(frame_size, self.tester_tx_port_num, dst_mac_list)\r
+            # clear streams before add new streams\r
+            self.tester.pktgen.clear_streams()\r
+            # run packet generator\r
+            streams = self.pktgen_helper.prepare_stream_from_tginput(tgenInput, 100, None, self.tester.pktgen)\r
+            # set traffic option\r
+            traffic_opt = {'duration': 5}\r
+            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams, options=traffic_opt)\r
+            self.verify(pps > 0, "No traffic detected")\r
+            throughput = pps / 1000000.0\r
+            test_result[frame_size] = throughput\r
+            self.result_table_add([frame_size, throughput])\r
+        self.result_table_print()\r
+        return test_result\r
+\r
+    def pvp_test_with_cbdma(self, socket_num=1, with_cbdma=True, cbdma_num=1):\r
+        self.frame_sizes = [64, 512, 1024, 1518]\r
+        self.start_vhost_app(with_cbdma=with_cbdma, cbdma_num=cbdma_num, socket_num=socket_num, client_mode=False)\r
+        self.start_virtio_testpmd(pmd_session=self.virtio_user0_pmd, dev_mac=self.virtio_dst_mac0, dev_id=0,\r
+                                  cores=self.vuser0_core_list, prefix='testpmd0', enable_queues=1, server_mode=False,\r
+                                  nb_cores=1, used_queues=1)\r
+        self.virtio_user0_pmd.execute_cmd('set fwd mac')\r
+        self.virtio_user0_pmd.execute_cmd('start tx_first')\r
+        self.virtio_user0_pmd.execute_cmd('stop')\r
+        self.virtio_user0_pmd.execute_cmd('start')\r
+        dst_mac_list = [self.virtio_dst_mac0]\r
+        perf_result = self.perf_test(frame_size=self.frame_sizes,dst_mac_list=dst_mac_list)\r
+        return perf_result\r
+\r
+    def test_perf_check_with_cbdma_channel_using_vhost_async_driver(self):\r
+        """\r
+        Test Case1: PVP performance check with CBDMA channel using vhost async driver\r
+        """\r
+        perf_result = []\r
+        self.get_cbdma_ports_info_and_bind_to_dpdk(1)\r
+\r
+        # test cbdma copy\r
+        # CBDMA copy needs vhost enqueue with cbdma channel using parameter '--dmas'\r
+        self.set_async_threshold(1518)\r
+        self.build_vhost_app()\r
+        cbmda_copy = self.pvp_test_with_cbdma(socket_num=1, with_cbdma=True, cbdma_num=1)\r
+\r
+        self.virtio_user0_pmd.execute_cmd("quit", "#")\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        # test sync copy\r
+        # Sync copy needs vhost enqueue with a cbdma channel, but the threshold (adjustable by changing the value of\r
+        # f.async_threshold in the dpdk code) is larger than the forwarded packet length\r
+        self.set_async_threshold(0)\r
+        self.build_vhost_app()\r
+        sync_copy = self.pvp_test_with_cbdma(socket_num=1, with_cbdma=True, cbdma_num=1)\r
+\r
+        self.virtio_user0_pmd.execute_cmd("quit", "#")\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        # test CPU copy\r
+        # CPU copy means vhost enqueue w/o cbdma channel\r
+        cpu_copy = self.pvp_test_with_cbdma(socket_num=1, with_cbdma=False, cbdma_num=0)\r
+\r
+        self.virtio_user0_pmd.execute_cmd("quit", "#")\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        self.table_header = ['Frame Size(Byte)', 'Mode', 'Throughput(Mpps)']\r
+        self.result_table_create(self.table_header)\r
+        for key in cbmda_copy.keys():\r
+            perf_result.append([key, 'cbdma_copy', cbmda_copy[key]])\r
+        for key in sync_copy.keys():\r
+            perf_result.append([key, 'sync_copy', sync_copy[key]])\r
+        for key in cpu_copy.keys():\r
+            perf_result.append([key, 'cpu_copy', cpu_copy[key]])\r
+        for table_row in perf_result:\r
+            self.result_table_add(table_row)\r
+        self.result_table_print()\r
+\r
+    def pvp_test_with_multi_cbdma(self, socket_num=2, with_cbdma=True, cbdma_num=1, launch_virtio=True, quit_vhost=False):\r
+        self.frame_sizes = [1518]\r
+        self.start_vhost_app(with_cbdma=with_cbdma, cbdma_num=cbdma_num, socket_num=socket_num, client_mode=True)\r
+        if launch_virtio:\r
+            self.start_virtio_testpmd(pmd_session=self.virtio_user0_pmd, dev_mac=self.virtio_dst_mac0, dev_id=0,\r
+                                      cores=self.vuser0_core_list, prefix='testpmd0', enable_queues=1, server_mode=True,\r
+                                      nb_cores=1, used_queues=1)\r
+            self.start_virtio_testpmd(pmd_session=self.virtio_user1_pmd, dev_mac=self.virtio_dst_mac1, dev_id=1,\r
+                                      cores=self.vuser1_core_list, prefix='testpmd1', enable_queues=1, server_mode=True,\r
+                                      nb_cores=1, used_queues=1)\r
+            self.virtio_user0_pmd.execute_cmd('set fwd mac')\r
+            self.virtio_user0_pmd.execute_cmd('start tx_first')\r
+            self.virtio_user0_pmd.execute_cmd('stop')\r
+            self.virtio_user0_pmd.execute_cmd('start')\r
+            self.virtio_user1_pmd.execute_cmd('set fwd mac')\r
+            self.virtio_user1_pmd.execute_cmd('start tx_first')\r
+            self.virtio_user1_pmd.execute_cmd('stop')\r
+            self.virtio_user1_pmd.execute_cmd('start')\r
+        else:\r
+            self.virtio_user0_pmd.execute_cmd('stop', 'testpmd> ', 30)\r
+            self.virtio_user0_pmd.execute_cmd('start tx_first', 'testpmd> ', 30)\r
+            self.virtio_user1_pmd.execute_cmd('stop', 'testpmd> ', 30)\r
+            self.virtio_user1_pmd.execute_cmd('start tx_first', 'testpmd> ', 30)\r
+        dst_mac_list = [self.virtio_dst_mac0, self.virtio_dst_mac1]\r
+        perf_result = self.perf_test(self.frame_sizes, dst_mac_list)\r
+        if quit_vhost:\r
+            self.vhost_user.send_expect("^C", "# ", 20)\r
+        return perf_result\r
+\r
+    def test_perf_check_with_multiple_cbdma_channels_using_vhost_async_driver(self):\r
+        """\r
+        Test Case2: PVP test with multiple CBDMA channels using vhost async driver\r
+        """\r
+        perf_result = []\r
+        self.get_cbdma_ports_info_and_bind_to_dpdk(2)\r
+        self.set_async_threshold(256)\r
+        self.build_vhost_app()\r
+\r
+        self.logger.info("Launch vhost app perf test")\r
+        before_relunch = self.pvp_test_with_multi_cbdma(socket_num=2, with_cbdma=True, cbdma_num=2, launch_virtio=True, quit_vhost=True)\r
+\r
+        self.logger.info("Relaunch vhost app perf test")\r
+        after_relunch = self.pvp_test_with_multi_cbdma(socket_num=2, with_cbdma=True, cbdma_num=2, launch_virtio=False, quit_vhost=False)\r
+\r
+        self.virtio_user0_pmd.execute_cmd("quit", "#")\r
+        self.virtio_user1_pmd.execute_cmd("quit", "#")\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        self.table_header = ['Frame Size(Byte)', 'Mode', 'Throughput(Mpps)']\r
+        self.result_table_create(self.table_header)\r
+        for key in before_relunch.keys():\r
+            perf_result.append([key, 'Before Re-launch vhost', before_relunch[key]])\r
+        for key in after_relunch.keys():\r
+            perf_result.append([key, 'After Re-launch vhost', after_relunch[key]])\r
+        for table_row in perf_result:\r
+            self.result_table_add(table_row)\r
+        self.result_table_print()\r
+\r
+        self.verify(abs(before_relunch[1518] - after_relunch[1518]) / before_relunch[1518] < 0.1, "Perf is unstable, \\r
+        before relaunch vhost app: %s, after relaunch vhost app: %s" % (before_relunch[1518], after_relunch[1518]))\r
+\r
+    def get_receive_throughput(self, pmd_session, count=5):\r
+        # poll the stats several times so the per-second rate settles\r
+        for _ in range(count):\r
+            pmd_session.execute_cmd('show port stats all')\r
+        out = pmd_session.execute_cmd('show port stats all')\r
+        pmd_session.execute_cmd('stop')\r
+        rx_throughput = re.compile('Rx-pps:\s+(\d+)').findall(out)\r
+        return float(rx_throughput[0]) / 1000000.0\r
+\r
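Parsing the throughput out of testpmd's `show port stats all` text can be sketched as follows; the sample output is abbreviated and illustrative:

```python
import re

SAMPLE_STATS = (
    '  ######################## NIC statistics for port 0 ############\n'
    '  RX-packets: 1000       RX-missed: 0          RX-bytes:  64000\n'
    '  Throughput (since last show)\n'
    '  Rx-pps:      1234567\n'
    '  Tx-pps:      1230000\n'
)

def parse_rx_mpps(out):
    # Note: regex flags such as re.S belong in re.compile(); passing re.S
    # to findall() would be silently interpreted as a start position.
    rx = re.compile(r'Rx-pps:\s+(\d+)').findall(out)
    return float(rx[0]) / 1000000.0

print(parse_rx_mpps(SAMPLE_STATS))  # -> 1.234567
```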
+    def set_testpmd0_param(self, pmd_session, eth_peer_mac):\r
+        pmd_session.execute_cmd('set fwd mac')\r
+        pmd_session.execute_cmd('start tx_first')\r
+        pmd_session.execute_cmd('stop')\r
+        pmd_session.execute_cmd('set eth-peer 0 %s' % eth_peer_mac)\r
+        pmd_session.execute_cmd('start')\r
+\r
+    def set_testpmd1_param(self, pmd_session, eth_peer_mac):\r
+        pmd_session.execute_cmd('set fwd mac')\r
+        pmd_session.execute_cmd('set eth-peer 0 %s' % eth_peer_mac)\r
+\r
+    def send_pkts_from_testpmd1(self, pmd_session, pkt_len):\r
+        pmd_session.execute_cmd('set txpkts %s' % pkt_len)\r
+        pmd_session.execute_cmd('start tx_first')\r
+\r
+    def vm2vm_check_with_two_cbdma(self, with_cbdma=True, cbdma_num=2, socket_num=2):\r
+        frame_sizes = [256, 2000]\r
+        self.start_vhost_app(with_cbdma=with_cbdma, cbdma_num=cbdma_num, socket_num=socket_num, client_mode=False)\r
+        self.start_virtio_testpmd(pmd_session=self.virtio_user0_pmd, dev_mac=self.virtio_dst_mac0, dev_id=0,\r
+                                  cores=self.vuser0_core_list, prefix='testpmd0', enable_queues=1, server_mode=False,\r
+                                  nb_cores=1, used_queues=1)\r
+        self.start_virtio_testpmd(pmd_session=self.virtio_user1_pmd, dev_mac=self.virtio_dst_mac1, dev_id=1,\r
+                                  cores=self.vuser1_core_list, prefix='testpmd1', enable_queues=1, server_mode=False,\r
+                                  nb_cores=1, used_queues=1)\r
+        self.set_testpmd0_param(self.virtio_user0_pmd, self.virtio_dst_mac1)\r
+        self.set_testpmd1_param(self.virtio_user1_pmd, self.virtio_dst_mac0)\r
+\r
+        rx_throughput = {}\r
+        for frame_size in frame_sizes:\r
+            self.send_pkts_from_testpmd1(pmd_session=self.virtio_user1_pmd, pkt_len=frame_size)\r
+            # Create test results table\r
+            table_header = ['Frame Size(Byte)', 'Throughput(Mpps)']\r
+            self.result_table_create(table_header)\r
+            rx_pps = self.get_receive_throughput(pmd_session=self.virtio_user1_pmd)\r
+            self.result_table_add([frame_size, rx_pps])\r
+            rx_throughput[frame_size] = rx_pps\r
+            self.result_table_print()\r
+        return rx_throughput\r
+\r
+    def test_vm2vm_check_with_two_cbdma_channels_using_vhost_async_driver(self):\r
+        """\r
+        Test Case3: VM2VM performance test with two CBDMA channels using vhost async driver\r
+        """\r
+        perf_result = []\r
+        self.get_cbdma_ports_info_and_bind_to_dpdk(2)\r
+        self.set_async_threshold(256)\r
+        self.build_vhost_app()\r
+\r
+        cbdma_enable = self.vm2vm_check_with_two_cbdma(with_cbdma=True, cbdma_num=2, socket_num=2)\r
+\r
+        self.virtio_user0_pmd.execute_cmd("quit", "#")\r
+        self.virtio_user1_pmd.execute_cmd("quit", "#")\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        cbdma_disable = self.vm2vm_check_with_two_cbdma(with_cbdma=False, cbdma_num=2, socket_num=2)\r
+\r
+        self.virtio_user0_pmd.execute_cmd("quit", "#")\r
+        self.virtio_user1_pmd.execute_cmd("quit", "#")\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        self.table_header = ['Frame Size(Byte)', 'CBDMA Enable/Disable', 'Throughput(Mpps)']\r
+        self.result_table_create(self.table_header)\r
+        for key in cbdma_enable.keys():\r
+            perf_result.append([key, 'Enable', cbdma_enable[key]])\r
+        for key in cbdma_disable.keys():\r
+            perf_result.append([key, 'Disable', cbdma_disable[key]])\r
+        for table_row in perf_result:\r
+            self.result_table_add(table_row)\r
+        self.result_table_print()\r
+\r
+        for cbdma_key in cbdma_enable.keys():\r
+            if cbdma_key == 2000:\r
+                self.verify(cbdma_enable[cbdma_key] > cbdma_disable[cbdma_key],\r
+                            "CBDMA enable performance {} should be better than CBDMA disable performance {} when sending"\r
+                            " 2000 byte packets".format(cbdma_enable[cbdma_key], cbdma_disable[cbdma_key]))\r
+            elif cbdma_key == 256:\r
+                self.verify(cbdma_disable[cbdma_key] > cbdma_enable[cbdma_key],\r
+                            "CBDMA enable performance {} should be lower than CBDMA disable performance {} when sending"\r
+                            " 256 byte packets".format(cbdma_enable[cbdma_key], cbdma_disable[cbdma_key]))\r
+\r
+    def vm2vm_check_with_two_vhost_device(self, with_cbdma=True, cbdma_num=2, socket_num=2, launch=True):\r
+        frame_sizes = [256, 2000]\r
+        if launch:\r
+            self.start_vhost_app(with_cbdma=with_cbdma, cbdma_num=cbdma_num, socket_num=socket_num, client_mode=False)\r
+            self.start_vms(mode=0, mergeable=False)\r
+            self.vm0_pmd = PmdOutput(self.vm_dut[0])\r
+            self.vm1_pmd = PmdOutput(self.vm_dut[1])\r
+            self.start_vm_testpmd(self.vm0_pmd)\r
+            self.start_vm_testpmd(self.vm1_pmd)\r
+        self.set_testpmd0_param(self.vm0_pmd, self.vm_dst_mac1)\r
+        self.set_testpmd1_param(self.vm1_pmd, self.vm_dst_mac0)\r
+\r
+        rx_throughput = {}\r
+        for frame_size in frame_sizes:\r
+            self.send_pkts_from_testpmd1(pmd_session=self.vm1_pmd, pkt_len=frame_size)\r
+            # Create test results table\r
+            table_header = ['Frame Size(Byte)', 'Throughput(Mpps)']\r
+            self.result_table_create(table_header)\r
+            rx_pps = self.get_receive_throughput(pmd_session=self.vm1_pmd)\r
+            self.result_table_add([frame_size, rx_pps])\r
+            rx_throughput[frame_size] = rx_pps\r
+            self.result_table_print()\r
+\r
+        return rx_throughput\r
+\r
+    def start_vms_testpmd_and_test(self, launch, quit_vm_testpmd=False):\r
+        # start vm0 and vm1 testpmd, send 256 and 2000 byte packets from vm1 testpmd\r
+        perf_result = self.vm2vm_check_with_two_vhost_device(with_cbdma=True, cbdma_num=2, socket_num=2, launch=launch)\r
+        # stop vm1 and clear vm1 stats\r
+        self.vm1_pmd.execute_cmd("stop")\r
+        self.vm1_pmd.execute_cmd("clear port stats all")\r
+        # stop vm0 and clear vm0 stats\r
+        self.vm0_pmd.execute_cmd("stop")\r
+        self.vm0_pmd.execute_cmd("clear port stats all")\r
+        # only start vm0, send packets from the tester, and check vm0 receives at least the number of packets the tester sent\r
+        self.vm0_pmd.execute_cmd("start")\r
+        self.send_vlan_packet(dts_mac=self.vm_dst_mac0, pkt_size=64, pkt_count=100)\r
+        time.sleep(3)\r
+        self.verify_receive_packet(pmd_session=self.vm0_pmd, expected_pkt_count=100)\r
+        # stop vm0\r
+        self.vm0_pmd.execute_cmd("stop")\r
+        # only start vm1 and send packets from the tester, then check vm1 receives no fewer packets than the tester sent\r
+        self.vm1_pmd.execute_cmd("start")\r
+        # clear vm1 stats after sending the start command\r
+        self.vm1_pmd.execute_cmd("clear port stats all")\r
+        self.send_vlan_packet(dts_mac=self.vm_dst_mac1, pkt_size=64, pkt_count=100)\r
+        time.sleep(3)\r
+        self.verify_receive_packet(pmd_session=self.vm1_pmd, expected_pkt_count=100)\r
+        if quit_vm_testpmd:\r
+            self.vm0_pmd.execute_cmd("quit", "#")\r
+            self.vm1_pmd.execute_cmd("quit", "#")\r
+        return perf_result\r
+\r
+    def test_vm2vm_check_with_two_vhost_device_using_vhost_async_driver(self):\r
+        """\r
+        Test Case4: VM2VM test with 2 vhost device using vhost async driver\r
+        """\r
+        perf_result = []\r
+        self.get_cbdma_ports_info_and_bind_to_dpdk(2)\r
+        self.set_async_threshold(256)\r
+        self.build_vhost_app()\r
+\r
+        before_rebind = self.start_vms_testpmd_and_test(launch=True, quit_vm_testpmd=True)\r
+        # repeatedly bind the VM ports between virtio-pci and vfio-pci 50 times\r
+        self.repeat_bind_driver(dut=self.vm_dut[0], repeat_times=50)\r
+        self.repeat_bind_driver(dut=self.vm_dut[1], repeat_times=50)\r
+        # start vm0 and vm1 testpmd\r
+        self.start_vm_testpmd(pmd_session=self.vm0_pmd)\r
+        self.start_vm_testpmd(pmd_session=self.vm1_pmd)\r
+        after_bind = self.start_vms_testpmd_and_test(launch=False, quit_vm_testpmd=False)\r
+\r
+        for i in range(len(self.vm)):\r
+            self.vm[i].stop()\r
+        self.vhost_user.send_expect("^C", "# ", 20)\r
+\r
+        self.table_header = ['Frame Size(Byte)', 'Before/After Bind VM Driver', 'Throughput(Mpps)']\r
+        self.result_table_create(self.table_header)\r
+        for key in before_rebind.keys():\r
+            perf_result.append([key, 'Before rebind driver', before_rebind[key]])\r
+        for key in after_bind.keys():\r
+            perf_result.append([key, 'After rebind driver', after_bind[key]])\r
+        for table_row in perf_result:\r
+            self.result_table_add(table_row)\r
+        self.result_table_print()\r
+\r
+    def close_all_session(self):\r
+        if getattr(self, 'vhost_user', None):\r
+            self.dut.close_session(self.vhost_user)\r
+        if getattr(self, 'virtio_user0', None):\r
+            self.dut.close_session(self.virtio_user0)\r
+        if getattr(self, 'virtio_user1', None):\r
+            self.dut.close_session(self.virtio_user1)\r
+\r
+    def tear_down(self):\r
+        """\r
+        Run after each test case.\r
+        """\r
+        self.bind_cbdma_device_to_kernel()\r
+        self.close_all_session()\r
+\r
+    def tear_down_all(self):\r
+        """\r
+        Run after the whole test suite.\r
+        """\r
+        self.set_max_queues(128)\r
+        self.set_async_threshold(256)\r
+        self.dut.build_install_dpdk(self.target)
\ No newline at end of file
-- 
2.25.1
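For reviewers skimming the patch: the before/after reporting loop at the end of test case 4 can be sketched as a small standalone helper. The function name below is hypothetical (not part of the suite); it only mirrors how the two `{frame_size: Mpps}` dicts are flattened into result-table rows.

```python
def build_perf_rows(before_rebind, after_rebind):
    """Merge two {frame_size: mpps} dicts into result-table rows,
    mirroring the reporting loop in the test case above."""
    rows = []
    for frame_size, mpps in before_rebind.items():
        rows.append([frame_size, 'Before rebind driver', mpps])
    for frame_size, mpps in after_rebind.items():
        rows.append([frame_size, 'After rebind driver', mpps])
    return rows
```

Each row then feeds `self.result_table_add()` unchanged.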



* Re: [dts] [PATCH V2] tests/vswitch_sample_cbdma:add test suite sync with test plan
  2021-01-07  5:48 ` Wang, Yinan
@ 2021-01-13  6:23   ` Tu, Lijuan
  0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2021-01-13  6:23 UTC (permalink / raw)
  To: Wang, Yinan, Ling, WeiX, dts; +Cc: Ling, WeiX

> > v1:add test suite sync with test plan.
> >
> > v2:modify Copyright(c) <2019> to Copyright(c) <2021>.
> >
> > Signed-off-by: Ling Wei <weix.ling@intel.com>
> Acked-by: Wang, Yinan <yinan.wang@intel.com>

Applied

