From: "Ma, LihongX"
To: "Wang, Yinan", "dts@dpdk.org"
Date: Thu, 14 Mar 2019 07:00:06 +0000
Subject: Re: [dts] [PATCH V3] Add test suite about pvp multi path of virtio single core performance
References: <1552519257-24619-1-git-send-email-lihongx.ma@intel.com>
List-Id: test suite reviews and discussions

Tested-by: Ma, LihongX

-----Original Message-----
From: Wang, Yinan
Sent: Thursday, March 14, 2019 2:49 PM
To: Ma, LihongX; dts@dpdk.org
Cc: Ma, LihongX
Subject: RE: [dts] [PATCH V3] Add test suite about pvp multi path of virtio single core performance

Acked-by: Wang, Yinan

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of lihong
> Sent: Thursday, March 14, 2019 7:21
> To: dts@dpdk.org
> Cc: Ma, LihongX
> Subject: [dts] [PATCH V3] Add test suite about pvp multi path of
> virtio single core performance
>
> Signed-off-by: lihong
> ---
>  ...p_multi_paths_virtio_single_core_performance.py | 274 +++++++++++++++++++++
>  1 file changed, 274 insertions(+)
>  create mode 100644 tests/TestSuite_pvp_multi_paths_virtio_single_core_performance.py
>
> diff --git a/tests/TestSuite_pvp_multi_paths_virtio_single_core_performance.py b/tests/TestSuite_pvp_multi_paths_virtio_single_core_performance.py
> new file mode 100644
> index 0000000..440a39a
> --- /dev/null
> +++ b/tests/TestSuite_pvp_multi_paths_virtio_single_core_performance.py
> @@ -0,0 +1,274 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2010-2019 Intel Corporation. All rights reserved.
> +# All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +#   * Redistributions of source code must retain the above copyright
> +#     notice, this list of conditions and the following disclaimer.
> +#   * Redistributions in binary form must reproduce the above copyright
> +#     notice, this list of conditions and the following disclaimer in
> +#     the documentation and/or other materials provided with the
> +#     distribution.
> +#   * Neither the name of Intel Corporation nor the names of its
> +#     contributors may be used to endorse or promote products derived
> +#     from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +"""
> +DPDK Test suite.
> +Test PVP virtio single core performance using virtio_user on 8 tx/rx paths.
> +"""
> +
> +import utils
> +from test_case import TestCase
> +from settings import HEADER_SIZE
> +
> +
> +class TestPVPMultiPathVirtioPerformance(TestCase):
> +
> +    def set_up_all(self):
> +        """
> +        Run at the start of each test suite.
> +        """
> +        self.frame_sizes = [64, 128, 256, 512, 1024, 1518]
> +        self.core_config = "1S/5C/1T"
> +        self.number_of_ports = 1
> +        self.headers_size = HEADER_SIZE['eth'] + HEADER_SIZE['ip'] + \
> +            HEADER_SIZE['udp']
> +        self.dut_ports = self.dut.get_ports()
> +        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
> +        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
> +        self.core_list = self.dut.get_core_list(
> +            self.core_config, socket=self.ports_socket)
> +        self.core_list_user = self.core_list[0:2]
> +        self.core_list_host = self.core_list[2:5]
> +        self.core_mask_user = utils.create_mask(self.core_list_user)
> +        self.core_mask_host = utils.create_mask(self.core_list_host)
> +        if self.dut.cores[len(self.dut.cores) - 1]['socket'] == '0':
> +            self.socket_mem = '1024'
> +        else:
> +            self.socket_mem = '1024,1024'
> +
> +    def set_up(self):
> +        """
> +        Run before each test case.
> + """ > + self.vhost_user =3D self.dut.new_session(suite=3D"user") > + self.vhost =3D self.dut.new_session(suite=3D"vhost") > + # Prepare the result table > + self.table_header =3D ['Frame'] > + self.table_header.append("Mode") > + self.table_header.append("Mpps") > + self.table_header.append("% linerate") > + self.result_table_create(self.table_header) > + > + def send_and_verify(self, case_info, frame_size): > + """ > + Send packet with packet generator and verify > + """ > + payload_size =3D frame_size - self.headers_size > + tgen_input =3D [] > + for port in xrange(self.number_of_ports): > + rx_port =3D self.tester.get_local_port( > + self.dut_ports[port % self.number_of_ports]) > + tx_port =3D self.tester.get_local_port( > + self.dut_ports[(port) % self.number_of_ports]) > + destination_mac =3D self.dut.get_mac_address( > + self.dut_ports[(port) % self.number_of_ports]) > + self.tester.scapy_append( > + 'wrpcap("l2fwd_%d.pcap", > [Ether(dst=3D"%s")/IP()/UDP()/("X"*%d)])' % > + (port, destination_mac, payload_size)) > + > + tgen_input.append((tx_port, rx_port, "l2fwd_%d.pcap" % > + port)) > + > + self.tester.scapy_execute() > + _, pps =3D self.tester.traffic_generator_throughput(tgen_input) > + Mpps =3D pps / 1000000.0 > + self.verify(Mpps > 0, "%s can not receive packets of frame=20 > + size %d" % (self.running_case, frame_size)) > + > + throughput =3D Mpps * 100 / \ > + float(self.wirespeed(self.nic, frame_size, > + self.number_of_ports)) > + > + results_row =3D [frame_size] > + results_row.append(case_info) > + results_row.append(Mpps) > + results_row.append(throughput) > + self.result_table_add(results_row) > + > + def start_vhost_testpmd(self): > + """ > + start testpmd on vhost > + """ > + self.dut.send_expect("rm -rf ./vhost.out", "#") > + self.dut.send_expect("rm -rf ./vhost-net*", "#") > + self.dut.send_expect("killall -s INT testpmd", "#") > + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") > + command_line_client =3D "./%s/app/testpmd -n %d -c %s > --socket-mem " + \ > + " %s --legacy-mem --file-prefix=3Dvhost > --vdev " + \ > + > "'net_vhost0,iface=3Dvhost-net,queues=3D1,client=3D0' -- -i --nb-cores=3D= 2=20 > --txd=3D1024 --rxd=3D1024" > + command_line_client =3D command_line_client % (self.target, > + self.dut.get_memory_channels(), self.core_mask_host, > self.socket_mem) > + self.vhost.send_expect(command_line_client, "testpmd> ", 120) > + self.vhost.send_expect("set fwd io", "testpmd> ", 120) > + self.vhost.send_expect("start", "testpmd> ", 120) > + > + def start_virtio_testpmd(self, args): > + """ > + start testpmd on virtio > + """ > + command_line_user =3D "./%s/app/testpmd -n %d -c %s " + \ > + " --socket-mem %s --legacy-mem --no-pci > --file-prefix=3Dvirtio " + \ > + > "--vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D./vhost-net,%s = " +=20 > \ > + "-- -i %s --rss-ip --nb-cores=3D1=20 > + --txd=3D1024 > --rxd=3D1024" > + command_line_user =3D command_line_user % (self.target, > + self.dut.get_memory_channels(), self.core_mask_user, > + self.socket_mem, args["version"], args["path"]) > + self.vhost_user.send_expect(command_line_user, "testpmd> ", > 120) > + self.vhost_user.send_expect("set fwd mac", "testpmd> ", 120) > + self.vhost_user.send_expect("start", "testpmd> ", 120) > + > + def close_all_testpmd(self): > + """ > + close all testpmd of vhost and virtio > + """ > + self.vhost.send_expect("quit", "#", 60) > + self.vhost_user.send_expect("quit", "#", 60) > + > + def close_all_session(self): > + """ > + close all session of vhost and 
> +        """
> +        self.dut.close_session(self.vhost_user)
> +        self.dut.close_session(self.vhost)
> +
> +    def test_perf_virtio_single_core_virtio11_mergeable(self):
> +        """
> +        performance for Vhost PVP virtio 1.1 Mergeable Path.
> +        """
> +        virtio_pmd_arg = {"version": "packed_vq=1,in_order=0,mrg_rxbuf=1",
> +                          "path": "--tx-offloads=0x0 --enable-hw-vlan-strip"}
> +        for frame_size in self.frame_sizes:
> +            self.start_vhost_testpmd()
> +            self.start_virtio_testpmd(virtio_pmd_arg)
> +            self.send_and_verify("virtio_1.1_mergeable on", frame_size)
> +            self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_virtio_single_core_virtio11_normal(self):
> +        """
> +        performance for Vhost PVP virtio 1.1 Normal Path.
> +        """
> +        virtio_pmd_arg = {"version": "packed_vq=1,in_order=0,mrg_rxbuf=0",
> +                          "path": "--tx-offloads=0x0 --enable-hw-vlan-strip"}
> +        for frame_size in self.frame_sizes:
> +            self.start_vhost_testpmd()
> +            self.start_virtio_testpmd(virtio_pmd_arg)
> +            self.send_and_verify("virtio_1.1_normal", frame_size)
> +            self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_virtio_single_core_virtio11_inorder(self):
> +        """
> +        performance for Vhost PVP virtio 1.1 Inorder Path.
> +        """
> +        virtio_pmd_arg = {"version": "packed_vq=1,in_order=1,mrg_rxbuf=0",
> +                          "path": "--tx-offloads=0x0 --enable-hw-vlan-strip"}
> +        for frame_size in self.frame_sizes:
> +            self.start_vhost_testpmd()
> +            self.start_virtio_testpmd(virtio_pmd_arg)
> +            self.send_and_verify("virtio_1.1 inorder", frame_size)
> +            self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_virtio_single_core_inorder_mergeable(self):
> +        """
> +        performance for Vhost PVP In_order mergeable Path.
> +        """
> +        virtio_pmd_arg = {"version": "packed_vq=0,in_order=1,mrg_rxbuf=1",
> +                          "path": "--tx-offloads=0x0 --enable-hw-vlan-strip"}
> +        for frame_size in self.frame_sizes:
> +            self.start_vhost_testpmd()
> +            self.start_virtio_testpmd(virtio_pmd_arg)
> +            self.send_and_verify("inorder mergeable on", frame_size)
> +            self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_virtio_single_core_inorder_no_mergeable(self):
> +        """
> +        performance for Vhost PVP In_order no_mergeable Path.
> +        """
> +        virtio_pmd_arg = {"version": "packed_vq=0,in_order=1,mrg_rxbuf=0",
> +                          "path": "--tx-offloads=0x0 --enable-hw-vlan-strip"}
> +        for frame_size in self.frame_sizes:
> +            self.start_vhost_testpmd()
> +            self.start_virtio_testpmd(virtio_pmd_arg)
> +            self.send_and_verify("inorder mergeable off", frame_size)
> +            self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_virtio_single_core_mergeable(self):
> +        """
> +        performance for Vhost PVP Mergeable Path.
> +        """
> +        virtio_pmd_arg = {"version": "packed_vq=0,in_order=0,mrg_rxbuf=1",
> +                          "path": "--tx-offloads=0x0 --enable-hw-vlan-strip"}
> +        for frame_size in self.frame_sizes:
> +            self.start_vhost_testpmd()
> +            self.start_virtio_testpmd(virtio_pmd_arg)
> +            self.send_and_verify("virtio mergeable", frame_size)
> +            self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_virtio_single_core_normal(self):
> +        """
> +        performance for Vhost PVP Normal Path.
> + """ > + virtio_pmd_arg =3D {"version": > "packed_vq=3D0,in_order=3D0,mrg_rxbuf=3D0", > + "path": "--tx-offloads=3D0x0 > --enable-hw-vlan-strip"} > + for frame_size in self.frame_sizes: > + self.start_vhost_testpmd() > + self.start_virtio_testpmd(virtio_pmd_arg) > + self.send_and_verify("virito normal", frame_size) > + self.close_all_testpmd() > + self.result_table_print() > + > + def test_perf_virtio_single_core_vector_rx(self): > + """ > + performance for Vhost PVP Vector Path > + """ > + virtio_pmd_arg =3D {"version": > "packed_vq=3D0,in_order=3D0,mrg_rxbuf=3D0", > + "path": "--tx-offloads=3D0x0"} > + for frame_size in self.frame_sizes: > + self.start_vhost_testpmd() > + self.start_virtio_testpmd(virtio_pmd_arg) > + self.send_and_verify("virtio vector rx", frame_size) > + self.close_all_testpmd() > + self.result_table_print() > + > + def tear_down(self): > + """ > + Run after each test case. > + """ > + self.dut.send_expect("killall -s INT testpmd", "#") > + self.close_all_session() > + > + def tear_down_all(self): > + """ > + Run after each test suite. > + """ > + pass > -- > 2.7.4