From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tu, Lijuan"
To: "Xiao, QimaiX" , "dts@dpdk.org"
CC: "Xiao, QimaiX"
Thread-Topic: [dts] [PATCH V1] add automation for pvp_virtio_user_multi_queues_port_restart
Date: Thu, 16 Apr 2020 08:49:12 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BC0B3B6@SHSMSX101.ccr.corp.intel.com>
References: <1585642936-227246-1-git-send-email-qimaix.xiao@intel.com>
In-Reply-To: <1585642936-227246-1-git-send-email-qimaix.xiao@intel.com>
Subject: Re: [dts] [PATCH V1] add automation for pvp_virtio_user_multi_queues_port_restart
List-Id: test suite reviews and discussions

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Xiao Qimai
> Sent: Tuesday, March 31, 2020 4:22 PM
> To: dts@dpdk.org
> Cc: Xiao, QimaiX
> Subject: [dts] [PATCH V1] add automation for
> pvp_virtio_user_multi_queues_port_restart
>
> *. add automation for
> test_plans:pvp_virtio_user_multi_queues_port_restart.rst
>
> Signed-off-by: Xiao Qimai
> ---
>  ...te_pvp_virtio_user_multi_queues_port_restart.py | 294 +++++++++++++++++++++
>  1 file changed, 294 insertions(+)
>  create mode 100644 tests/TestSuite_pvp_virtio_user_multi_queues_port_restart.py
>
> diff --git a/tests/TestSuite_pvp_virtio_user_multi_queues_port_restart.py
> b/tests/TestSuite_pvp_virtio_user_multi_queues_port_restart.py
> new file mode 100644
> index 0000000..58d603f
> --- /dev/null
> +++ b/tests/TestSuite_pvp_virtio_user_multi_queues_port_restart.py
> @@ -0,0 +1,294 @@
> +#
> +# BSD LICENSE
> +#
> +# Copyright(c) 2010-2020 Intel Corporation. All rights reserved.
> +# All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +#   * Redistributions of source code must retain the above copyright
> +#     notice, this list of conditions and the following disclaimer.
> +#   * Redistributions in binary form must reproduce the above copyright
> +#     notice, this list of conditions and the following disclaimer in
> +#     the documentation and/or other materials provided with the
> +#     distribution.
> +#   * Neither the name of Intel Corporation nor the names of its
> +#     contributors may be used to endorse or promote products derived
> +#     from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +
> +"""
> +DPDK Test suite.
> +This suite tests vhost/virtio-user PVP multi-queues with the split and
> +packed virtqueues over different rx/tx paths: split virtqueue in-order
> +mergeable, in-order non-mergeable, mergeable, non-mergeable and
> +vector_rx paths, and packed virtqueue in-order mergeable, in-order
> +non-mergeable, mergeable and non-mergeable paths. Each path also covers
> +a port restart test.
> +"""
> +import time
> +import re
> +from test_case import TestCase
> +from packet import Packet
> +from pktgen import PacketGeneratorHelper
> +
> +
> +class TestPVPVirtioUserMultiQueuesPortRestart(TestCase):
> +
> +    def set_up_all(self):
> +        """
> +        Run at the start of each test suite.
> +        """
> +        self.frame_sizes = [64]
> +        self.dut_ports = self.dut.get_ports()
> +        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
> +        # get core mask
> +        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
> +        self.core_list = self.dut.get_core_list('all', socket=self.ports_socket)
> +        self.dst_mac = self.dut.get_mac_address(self.dut_ports[0])
> +        self.out_path = '/tmp'
> +        out = self.tester.send_expect('ls -d %s' % self.out_path, '# ')
> +        if 'No such file or directory' in out:
> +            self.tester.send_expect('mkdir -p %s' % self.out_path, '# ')
> +        # create an instance to set stream field setting
> +        self.pktgen_helper = PacketGeneratorHelper()
> +        self.pci_info = self.dut.ports_info[0]['pci']
> +        self.vhost = self.dut.new_session(suite="vhost-user")
> +        self.tx_port = self.tester.get_local_port(self.dut_ports[0])
> +        self.queue_number = 2
> +        self.dut.kill_all()
> +
> +    def set_up(self):
> +        """
> +        Run before each test case.
> +        """
> +        # Clean the execution ENV
> +        self.dut.send_expect("rm -rf ./vhost.out", "#")
> +        # Prepare the result table
> +        self.table_header = ["FrameSize(B)", "Mode",
> +                             "Throughput(Mpps)", "% linerate", "Cycle"]
> +        self.result_table_create(self.table_header)
> +
> +    def start_vhost_testpmd(self):
> +        """
> +        start testpmd on vhost
> +        """
> +        self.dut.send_expect("killall -s INT testpmd", "#")
> +        self.dut.send_expect("rm -rf ./vhost-net*", "#")
> +        testcmd = self.dut.target + "/app/testpmd "
> +        vdev = 'net_vhost0,iface=vhost-net,queues=2,client=0'
> +        eal_params = self.dut.create_eal_parameters(cores=self.core_list[2:5],
> +                                                    prefix='vhost', ports=[self.pci_info],
> +                                                    vdevs=[vdev])
> +        para = " -- -i --nb-cores=2 --rxq=%s --txq=%s --rss-ip" % (self.queue_number, self.queue_number)
> +        command_line_vhost = testcmd + eal_params + para
> +        self.vhost.send_expect(command_line_vhost, "testpmd> ", 120)
> +        self.vhost.send_expect("set fwd mac", "testpmd> ", 120)
> +        self.vhost.send_expect("start", "testpmd> ", 120)
> +
> +    def start_virtio_user_testpmd(self, flag):
> +        """
> +        start testpmd with virtio-user; vdev options depend on the path flag
> +        """
> +        testcmd = self.dut.target + "/app/testpmd "
> +        vdev = 'net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2'
> +        if 'packed_ring' in flag:
> +            vdev += ',packed_vq=1'
> +        if 'nonmergeable' in flag or 'vector' in flag:
> +            vdev += ',mrg_rxbuf=0'
> +        if 'inorder' not in flag:
> +            vdev += ',in_order=0'
> +        eal_params = self.dut.create_eal_parameters(cores=self.core_list[5:8],
> +                                                    prefix='virtio', no_pci=True,
> +                                                    vdevs=[vdev])
> +        if 'vector' not in flag:
> +            para = " -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=%s --txq=%s" % (
> +                self.queue_number, self.queue_number)
> +        else:
> +            para = " -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=%s --txq=%s" % (
> +                self.queue_number, self.queue_number)
> +        command_line_user = testcmd + eal_params + para
> +        self.dut.send_expect(command_line_user, "testpmd> ", 30)
> +        self.dut.send_expect("set fwd mac", "testpmd> ", 30)
> +        self.dut.send_expect("start", "testpmd> ", 30)
> +
> +    def check_port_link_status_after_port_restart(self):
> +        """
> +        check the link status after port restart
> +        """
> +        loop = 1
> +        port_status = 'down'
> +        while loop <= 5:
> +            out = self.vhost.send_expect("show port info 0", "testpmd> ", 120)
> +            port_status = re.findall("Link\s*status:\s*([a-z]*)", out)
> +            if "down" not in port_status:
> +                break
> +            time.sleep(2)
> +            loop = loop + 1
> +        self.verify("down" not in port_status, "port can not come up after restart")
> +
> +    def port_restart(self, restart_times=1):
> +        for i in range(restart_times):
> +            self.vhost.send_expect("stop", "testpmd> ", 120)
> +            self.vhost.send_expect("port stop 0", "testpmd> ", 120)
> +            self.vhost.send_expect("clear port stats 0", "testpmd> ", 120)
> +            self.vhost.send_expect("port start 0", "testpmd> ", 120)
> +            self.check_port_link_status_after_port_restart()
> +        self.vhost.send_expect("start", "testpmd> ", 120)
> +
> +    def update_table_info(self, case_info, frame_size, Mpps, throughput, Cycle):
> +        results_row = [frame_size]
> +        results_row.append(case_info)
> +        results_row.append(Mpps)
> +        results_row.append(throughput)
> +        results_row.append(Cycle)
> +        self.result_table_add(results_row)
> +
> +    def calculate_avg_throughput(self, frame_size):
> +        """
> +        start to send packets and get the throughput
> +        """
> +        pkt = Packet(pkt_type='IP_RAW', pkt_len=frame_size)
> +        pkt.config_layer('ether', {'dst': '%s' % self.dst_mac})
> +        pkt.save_pcapfile(self.tester, "%s/pvp_multipath.pcap" % self.out_path)
> +
> +        tgenInput = []
> +        port = self.tester.get_local_port(self.dut_ports[0])
> +        tgenInput.append((port, port, "%s/pvp_multipath.pcap" % self.out_path))
> +
> +        self.tester.pktgen.clear_streams()
> +        fields_config = {'ip': {'dst': {'action': 'random'}}}
> +        streams = self.pktgen_helper.prepare_stream_from_tginput(tgenInput, 100, fields_config, self.tester.pktgen)
> +        # set traffic option
> +        traffic_opt = {'delay': 5}
> +        _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams, options=traffic_opt)
> +        Mpps = pps / 1000000.0
> +        self.verify(Mpps > 0, "can not receive packets of frame size %d" % frame_size)
> +        throughput = Mpps * 100 / \
> +            float(self.wirespeed(self.nic, frame_size, 1))
> +        return Mpps, throughput
> +
> +    def send_and_verify(self, case_info):
> +        """
> +        start to send packets and verify the result
> +        """
> +        for frame_size in self.frame_sizes:
> +            info = "Running test %s, and %d frame size." % (self.running_case, frame_size)
> +            self.logger.info(info)
> +            Mpps, throughput = self.calculate_avg_throughput(frame_size)
> +            self.update_table_info(case_info, frame_size, Mpps, throughput, "Before Restart")
> +            self.check_packets_of_each_queue(frame_size)
> +            restart_times = 100 if case_info == 'packed_ring_mergeable' else 1
> +            self.port_restart(restart_times=restart_times)
> +            Mpps, throughput = self.calculate_avg_throughput(frame_size)
> +            self.update_table_info(case_info, frame_size, Mpps, throughput, "After Restart")
> +            self.check_packets_of_each_queue(frame_size)
> +
> +    def check_packets_of_each_queue(self, frame_size):
> +        """
> +        check that each queue has received packets
> +        """
> +        out = self.dut.send_expect("stop", "testpmd> ", 60)
> +        p = re.compile("RX Port= 0/Queue= (\d+) -> TX Port= 0/Queue= \d+.*\n.*RX-packets:\s?(\d+).*TX-packets:\s?(\d+)")
> +        res = p.findall(out)
> +        self.res_queues = sorted([int(i[0]) for i in res])
> +        self.res_rx_pkts = [int(i[1]) for i in res]
> +        self.res_tx_pkts = [int(i[2]) for i in res]
> +        self.verify(self.res_queues == list(range(self.queue_number)),
> +                    "frame_size: %s, expect %s queues to handle packets, result %s queues" % (
> +                        frame_size, list(range(self.queue_number)), self.res_queues))
> +        self.verify(all(self.res_rx_pkts), "each queue should have rx packets, result: %s" % self.res_rx_pkts)
> +        self.verify(all(self.res_tx_pkts), "each queue should have tx packets, result: %s" % self.res_tx_pkts)
> +        self.dut.send_expect("start", "testpmd> ", 60)
> +
> +    def close_all_testpmd(self):
> +        """
> +        close testpmd on the vhost-user and virtio-user sides
> +        """
> +        self.vhost.send_expect("quit", "#", 60)
> +        self.dut.send_expect("quit", "#", 60)
> +
> +    def test_perf_pvp_2queues_test_with_packed_ring_mergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="packed_ring_mergeable")
> +        self.send_and_verify("packed_ring_mergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_packed_ring_nonmergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="packed_ring_nonmergeable")
> +        self.send_and_verify("packed_ring_nonmergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_split_ring_inorder_mergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="split_ring_inorder_mergeable")
> +        self.send_and_verify("split_ring_inorder_mergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_split_ring_inorder_nonmergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="split_ring_inorder_nonmergeable")
> +        self.send_and_verify("split_ring_inorder_nonmergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_split_ring_mergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="split_ring_mergeable")
> +        self.send_and_verify("split_ring_mergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_split_ring_nonmergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="split_ring_nonmergeable")
> +        self.send_and_verify("split_ring_nonmergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_split_ring_vector_rx_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="split_ring_vector_rx")
> +        self.send_and_verify("split_ring_vector_rx")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_packed_ring_inorder_mergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="packed_ring_inorder_mergeable")
> +        self.send_and_verify("packed_ring_inorder_mergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def test_perf_pvp_2queues_test_with_packed_ring_inorder_nonmergeable_path(self):
> +        self.start_vhost_testpmd()
> +        self.start_virtio_user_testpmd(flag="packed_ring_inorder_nonmergeable")
> +        self.send_and_verify("packed_ring_inorder_nonmergeable")
> +        self.close_all_testpmd()
> +        self.result_table_print()
> +
> +    def tear_down(self):
> +        """
> +        Run after each test case.
> +        """
> +        self.dut.kill_all()
> +        time.sleep(2)
> +
> +    def tear_down_all(self):
> +        """
> +        Run after each test suite.
> +        """
> +        self.dut.close_session(self.vhost)
> --
> 1.8.3.1
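As a side note for reviewers: the vdev-building branches in start_virtio_user_testpmd() can be summarized as a pure helper, which makes the flag-to-devarg mapping easy to unit-test off-target. A minimal sketch only; `build_virtio_user_vdev` is a hypothetical name, not part of this patch:

```python
def build_virtio_user_vdev(flag, queues=2, mac="00:01:02:03:04:05",
                           path="./vhost-net"):
    """Translate a path flag (e.g. 'packed_ring_inorder_mergeable') into
    the virtio-user vdev string the suite passes to testpmd."""
    vdev = "net_virtio_user0,mac=%s,path=%s,queues=%d" % (mac, path, queues)
    if "packed_ring" in flag:
        vdev += ",packed_vq=1"     # use the packed virtqueue layout
    if "nonmergeable" in flag or "vector" in flag:
        vdev += ",mrg_rxbuf=0"     # disable mergeable rx buffers
    if "inorder" not in flag:
        vdev += ",in_order=0"      # allow out-of-order descriptor use
    return vdev
```

For example, `build_virtio_user_vdev("split_ring_vector_rx")` yields a vdev with `mrg_rxbuf=0,in_order=0` and no `packed_vq` option, matching the vector_rx case in the patch.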