From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wang, Yinan"
To: "Xiao, QimaiX" , "dts@dpdk.org"
CC: "Xiao, QimaiX"
Thread-Topic: [dts] [PATCH V3]pvp_vhost_user_reconnect: check perf data in each reconnect loop
Date: Fri, 19 Jun 2020 12:07:27 +0000
References: <1592565007-21251-1-git-send-email-qimaix.xiao@intel.com>
In-Reply-To: <1592565007-21251-1-git-send-email-qimaix.xiao@intel.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="Windows-1252"
MIME-Version: 1.0
Subject: Re: [dts] [PATCH V3]pvp_vhost_user_reconnect: check perf data in each reconnect loop
List-Id: test suite reviews and discussions
Sender: "dts"

Acked-by: Wang, Yinan

> -----Original Message-----
> From: dts On Behalf Of Xiao Qimai
> Sent:
> 19 June 2020 19:10
> To: dts@dpdk.org
> Cc: Xiao, QimaiX
> Subject: [dts] [PATCH V3]pvp_vhost_user_reconnect: check perf data in each
> reconnect loop
>
> * 1.check perf data in each reconnect loop
> * 2.increase perf descend tolerance from 5% to 15% because of network
> fluctuations
>
> Signed-off-by: Xiao Qimai
> ---
>  tests/TestSuite_pvp_vhost_user_reconnect.py | 115 +++++++++++++++++----
> -------
>  1 file changed, 72 insertions(+), 43 deletions(-)
>
> diff --git a/tests/TestSuite_pvp_vhost_user_reconnect.py
> b/tests/TestSuite_pvp_vhost_user_reconnect.py
> index 2ddc454..b609115 100644
> --- a/tests/TestSuite_pvp_vhost_user_reconnect.py
> +++ b/tests/TestSuite_pvp_vhost_user_reconnect.py
> @@ -71,7 +71,7 @@ class TestPVPVhostUserReconnect(TestCase):
>          self.frame_sizes = [64, 1518]
>          self.virtio_ip = ["1.1.1.2", "1.1.1.3"]
>          self.virtio_mac = ["52:54:00:00:00:01",
> -                            "52:54:00:00:00:02"]
> +                           "52:54:00:00:00:02"]
>          self.src1 = "192.168.4.1"
>          self.dst1 = "192.168.3.1"
>          self.checked_vm = False
> @@ -116,7 +116,8 @@ class TestPVPVhostUserReconnect(TestCase):
>          for i in range(self.vm_num):
>              vdev_info += "--vdev 'net_vhost%d,iface=vhost-
> net%d,client=1,queues=1' " % (i, i)
>          testcmd = self.dut.base_dir + "/%s/app/testpmd " % self.target
> -        eal_params = self.dut.create_eal_parameters(cores=self.cores, no_pci=True, prefix='vhost', ports=[self.pci_info])
> +        eal_params = self.dut.create_eal_parameters(cores=self.cores, no_pci=True, prefix='vhost',
> +                                                    ports=[self.pci_info])
>          para = " -- -i --nb-cores=1 --txd=1024 --rxd=1024"
>          self.vhostapp_testcmd = testcmd + eal_params + vdev_info + para
>          self.vhost_user.send_expect(self.vhostapp_testcmd, "testpmd> ", 40)
> @@ -127,10 +128,10 @@ class TestPVPVhostUserReconnect(TestCase):
>          check the link status is up after testpmd start
>          """
>          loop = 1
> -        while(loop <= 5):
> +        while (loop <= 5):
>              out = dut_info.send_expect("show port info all", "testpmd> ", 120)
>              port_status = re.findall("Link\s*status:\s*([a-z]*)", out)
> -            if("down" not in port_status):
> +            if ("down" not in port_status):
>                  break
>              time.sleep(3)
>              loop = loop + 1
> @@ -153,11 +154,11 @@ class TestPVPVhostUserReconnect(TestCase):
>          out = self.dut.send_expect("%s --version" % self.vm_qemu_version, "#")
>          result = re.search("QEMU\s*emulator\s*version\s*(\d*.\d*)", out)
>          self.verify(result is not None,
> -                'the qemu path may be not right: %s' % self.vm_qemu_version)
> +                    'the qemu path may be not right: %s' % self.vm_qemu_version)
>          version = result.group(1)
>          index = version.find('.')
>          self.verify(int(version[:index]) > 2 or
> -                    (int(version[:index]) == 2 and int(version[index+1:]) >= 7),
> +                    (int(version[:index]) == 2 and int(version[index + 1:]) >= 7),
>                      'This qemu version should greater than 2.7 ' + \
>                      'in this suite, please config it in vhost_sample.cfg file')
>          self.checked_vm = True
> @@ -176,7 +177,7 @@ class TestPVPVhostUserReconnect(TestCase):
>              vm_params = {}
>              vm_params['driver'] = 'vhost-user'
>              vm_params['opt_path'] = './vhost-net%d' % (i)
> -            vm_params['opt_mac'] = '52:54:00:00:00:0%d' % (i+1)
> +            vm_params['opt_mac'] = '52:54:00:00:00:0%d' % (i + 1)
>              vm_params['opt_server'] = 'server'
>              vm_params['opt_settings'] = setting_args
>              vm_info.set_vm_device(**vm_params)
> @@ -198,7 +199,7 @@ class TestPVPVhostUserReconnect(TestCase):
>          start testpmd in vm
>          """
>          vm_testpmd = self.dut.target + "/app/testpmd -c 0x3 -n 4 " + \
> -                      "-- -i --port-topology=chained --txd=1024 --rxd=1024 "
> +                     "-- -i --port-topology=chained --txd=1024 --rxd=1024 "
>          for i in range(len(self.vm_dut)):
>              self.vm_dut[i].send_expect(vm_testpmd, "testpmd> ", 20)
>              self.vm_dut[i].send_expect("set fwd mac", "testpmd> ")
> @@ -225,24 +226,24 @@ class TestPVPVhostUserReconnect(TestCase):
>              time.sleep(5)
>              vm_intf = self.vm_dut[i].ports_info[0]['intf']
>              self.vm_dut[i].send_expect("ifconfig %s %s" %
> -                                        (vm_intf, self.virtio_ip[i]), "#", 10)
> +                                       (vm_intf, self.virtio_ip[i]), "#", 10)
>              self.vm_dut[i].send_expect("ifconfig %s up" % vm_intf, "#", 10)
>
>          self.vm_dut[0].send_expect('arp -s %s %s' %
> -                                    (self.virtio_ip[1], self.virtio_mac[1]), '#', 10)
> +                                   (self.virtio_ip[1], self.virtio_mac[1]), '#', 10)
>          self.vm_dut[1].send_expect('arp -s %s %s' %
> -                                    (self.virtio_ip[0], self.virtio_mac[0]), '#', 10)
> +                                   (self.virtio_ip[0], self.virtio_mac[0]), '#', 10)
>
>      def start_iperf(self):
>          """
>          start iperf
>          """
>          self.vm_dut[0].send_expect(
> -             'iperf -s -p 12345 -i 1 > iperf_server.log &', '', 10)
> +            'iperf -s -p 12345 -i 1 > iperf_server.log &', '', 10)
>          self.vm_dut[1].send_expect(
> -            'iperf -c %s -p 12345 -i 1 -t 5 > iperf_client.log &' %
> -            self.virtio_ip[0], '', 60)
> -        time.sleep(20)
> +            'iperf -c %s -p 12345 -i 1 -t 10 > iperf_client.log &' %
> +            self.virtio_ip[0], '', 60)
> +        time.sleep(15)
>
>      def iperf_result_verify(self, cycle, tinfo):
>          """
> @@ -250,7 +251,7 @@ class TestPVPVhostUserReconnect(TestCase):
>          """
>          # copy iperf_client file from vm1
>          self.vm_dut[1].session.copy_file_from("%s/iperf_client.log" %
> -                                               self.dut.base_dir)
> +                                              self.dut.base_dir)
>          fp = open("./iperf_client.log")
>          fmsg = fp.read()
>          fp.close()
> @@ -261,12 +262,17 @@ class TestPVPVhostUserReconnect(TestCase):
>          else:
>              cinfo = tinfo
>          self.result_table_add(["vm2vm iperf", iperfdata[-1], cinfo])
> +        data_li = iperfdata[-1].strip().split()
> +        if self.nic in ['fortville_spirit']:
> +            self.verify(data_li[1] == 'Gbits/sec', 'data unit not correct')
> +        return float(data_li[0])
>
>      def send_and_verify(self, cycle=0, tinfo=""):
> +        frame_data = dict().fromkeys(self.frame_sizes, 0)
>          for frame_size in self.frame_sizes:
> -            pkt = Packet(pkt_type = 'UDP', pkt_len = frame_size)
> +            pkt = Packet(pkt_type='UDP', pkt_len=frame_size)
>              pkt.config_layers([('ether', {'dst': '%s' % self.dst_mac}),
> -                                ('ipv4', {'dst': '%s' % self.dst1, 'src': '%s' % self.src1})])
> +                               ('ipv4', {'dst': '%s' % self.dst1, 'src': '%s' % self.src1})])
>              pkt.save_pcapfile(self.tester, "%s/reconnect.pcap" % self.out_path)
>
>              tgenInput = []
> @@ -275,7 +281,7 @@ class TestPVPVhostUserReconnect(TestCase):
>
>              self.tester.pktgen.clear_streams()
>              streams =
> self.pktgen_helper.prepare_stream_from_tginput(tgenInput, 100,
> -                None, self.tester.pktgen)
> +                                                                    None, self.tester.pktgen)
>              traffic_opt = {'delay': 30, }
>              _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams,
> options=traffic_opt)
>              Mpps = pps / 1000000.0
> @@ -285,7 +291,8 @@ class TestPVPVhostUserReconnect(TestCase):
>              check_speed = 5 if frame_size == 64 else 1
>              self.verify(Mpps > check_speed, "can not receive packets of frame
> size %d" % (frame_size))
>              pct = Mpps * 100 / \
> -                float(self.wirespeed(self.nic, frame_size, 1))
> +                  float(self.wirespeed(self.nic, frame_size, 1))
> +            frame_data[frame_size] = Mpps
>              if cycle == 0:
>                  data_row = [tinfo, frame_size, str(Mpps), str(pct),
>                              "Before relaunch", "1"]
> @@ -293,20 +300,30 @@ class TestPVPVhostUserReconnect(TestCase):
>                  data_row = [tinfo, frame_size, str(Mpps), str(pct),
>                              "After relaunch", "1"]
>              self.result_table_add(data_row)
> +        return frame_data
> +
> +    def check_reconnect_perf(self):
> +        if isinstance(self.before_data, dict):
> +            for i in self.frame_sizes:
> +                self.verify(
> +                    (self.before_data[i] - self.reconnect_data[i]) < self.before_data[i]
> * 0.15, 'verify reconnect speed failed')
> +        else:
> +            self.verify(
> +                (self.before_data - self.reconnect_data) < self.before_data * 0.15,
> 'verify reconnect speed failed')
>
>      def test_perf_split_ring_reconnet_one_vm(self):
>          """
>          test reconnect stability test of one vm
>          """
>          self.header_row = ["Mode", "FrameSize(B)", "Throughput(Mpps)",
> -                          "LineRate(%)", "Cycle", "Queue Number"]
> +                           "LineRate(%)", "Cycle", "Queue Number"]
>          self.result_table_create(self.header_row)
>          vm_cycle = 0
>          self.vm_num = 1
>          self.launch_testpmd_as_vhost_user()
>          self.start_vms()
>          self.vm_testpmd_start()
> -        self.send_and_verify(vm_cycle, "reconnet one vm")
> +        self.before_data = self.send_and_verify(vm_cycle, "reconnet one vm")
>
>          vm_cycle = 1
>          # reconnet from vhost
> @@ -314,7 +331,8 @@ class TestPVPVhostUserReconnect(TestCase):
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT testpmd", "# ")
>              self.launch_testpmd_as_vhost_user()
> -            self.send_and_verify(vm_cycle, "reconnet from vhost")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> vhost")
> +            self.check_reconnect_perf()
>
>          # reconnet from qemu
>          self.logger.info('now reconnect from vm')
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
>              self.start_vms()
>              self.vm_testpmd_start()
> -            self.send_and_verify(vm_cycle, "reconnet from VM")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> VM")
> +            self.check_reconnect_perf()
>          self.result_table_print()
>
>      def test_perf_split_ring_reconnet_two_vms(self):
>          """
> @@ -330,14 +349,14 @@ class TestPVPVhostUserReconnect(TestCase):
>          test reconnect stability test of two vms
>          """
>          self.header_row = ["Mode", "FrameSize(B)", "Throughput(Mpps)",
> -                          "LineRate(%)", "Cycle", "Queue Number"]
> +                           "LineRate(%)", "Cycle", "Queue Number"]
>          self.result_table_create(self.header_row)
>          vm_cycle = 0
>          self.vm_num = 2
>          self.launch_testpmd_as_vhost_user()
>          self.start_vms()
>          self.vm_testpmd_start()
> -        self.send_and_verify(vm_cycle, "reconnet two vm")
> +        self.before_data = self.send_and_verify(vm_cycle, "reconnet two vm")
>
>          vm_cycle = 1
>          # reconnet from vhost
> @@ -345,7 +364,8 @@ class TestPVPVhostUserReconnect(TestCase):
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT testpmd", "# ")
>              self.launch_testpmd_as_vhost_user()
> -            self.send_and_verify(vm_cycle, "reconnet from vhost")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> vhost")
> +            self.check_reconnect_perf()
>
>          # reconnet from qemu
>          self.logger.info('now reconnect from vm')
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
>              self.start_vms()
>              self.vm_testpmd_start()
> -            self.send_and_verify(vm_cycle, "reconnet from VM")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> VM")
> +            self.check_reconnect_perf()
>          self.result_table_print()
>
>      def test_perf_split_ring_vm2vm_virtio_net_reconnet_two_vms(self):
> @@ -368,7 +389,7 @@ class TestPVPVhostUserReconnect(TestCase):
>          self.start_vms()
>          self.config_vm_intf()
>          self.start_iperf()
> -        self.iperf_result_verify(vm_cycle, 'before reconnet')
> +        self.before_data = self.iperf_result_verify(vm_cycle, 'before reconnet')
>
>          vm_cycle = 1
>          # reconnet from vhost
> @@ -377,10 +398,12 @@ class TestPVPVhostUserReconnect(TestCase):
>              self.dut.send_expect("killall -s INT testpmd", "# ")
>              self.launch_testpmd_as_vhost_user_with_no_pci()
>              self.start_iperf()
> -            self.iperf_result_verify(vm_cycle, 'reconnet from vhost')
> +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet
> from vhost')
> +            self.check_reconnect_perf()
>
>          # reconnet from VM
>          self.logger.info('now reconnect from vm')
> +        vm_tmp = list()
>          for i in range(self.reconnect_times):
>              self.vm_dut[0].send_expect('rm iperf_server.log', '# ', 10)
>              self.vm_dut[1].send_expect('rm iperf_client.log', '# ', 10)
> @@ -388,7 +411,8 @@ class TestPVPVhostUserReconnect(TestCase):
>              self.start_vms()
>              self.config_vm_intf()
>              self.start_iperf()
> -            self.iperf_result_verify(vm_cycle, 'reconnet from vm')
> +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet
> from vm')
> +            self.check_reconnect_perf()
>          self.result_table_print()
>
>      def test_perf_packed_ring_reconnet_one_vm(self):
> @@ -396,14 +420,14 @@ class TestPVPVhostUserReconnect(TestCase):
>          test reconnect stability test of one vm
>          """
>          self.header_row = ["Mode", "FrameSize(B)", "Throughput(Mpps)",
> -                          "LineRate(%)", "Cycle", "Queue Number"]
> +                           "LineRate(%)", "Cycle", "Queue Number"]
>          self.result_table_create(self.header_row)
>          vm_cycle = 0
>          self.vm_num = 1
>          self.launch_testpmd_as_vhost_user()
>          self.start_vms(packed=True)
>          self.vm_testpmd_start()
> -        self.send_and_verify(vm_cycle, "reconnet one vm")
> +        self.before_data = self.send_and_verify(vm_cycle, "reconnet one vm")
>
>          vm_cycle = 1
>          # reconnet from vhost
> @@ -411,7 +435,8 @@ class TestPVPVhostUserReconnect(TestCase):
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT testpmd", "# ")
>              self.launch_testpmd_as_vhost_user()
> -            self.send_and_verify(vm_cycle, "reconnet from vhost")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> vhost")
> +            self.check_reconnect_perf()
>
>          # reconnet from qemu
>          self.logger.info('now reconnect from vm')
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
>              self.start_vms(packed=True)
>              self.vm_testpmd_start()
> -            self.send_and_verify(vm_cycle, "reconnet from VM")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> VM")
> +            self.check_reconnect_perf()
>          self.result_table_print()
>
>      def test_perf_packed_ring_reconnet_two_vms(self):
>          """
> @@ -427,14 +453,14 @@ class TestPVPVhostUserReconnect(TestCase):
>          test reconnect stability test of two vms
>          """
>          self.header_row = ["Mode", "FrameSize(B)", "Throughput(Mpps)",
> -                          "LineRate(%)", "Cycle", "Queue Number"]
> +                           "LineRate(%)", "Cycle", "Queue Number"]
>          self.result_table_create(self.header_row)
>          vm_cycle = 0
>          self.vm_num = 2
>          self.launch_testpmd_as_vhost_user()
>          self.start_vms(packed=True)
>          self.vm_testpmd_start()
> -        self.send_and_verify(vm_cycle, "reconnet two vm")
> +        self.before_data = self.send_and_verify(vm_cycle, "reconnet two vm")
>
>          vm_cycle = 1
>          # reconnet from vhost
> @@ -442,15 +468,16 @@ class TestPVPVhostUserReconnect(TestCase):
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT testpmd", "# ")
>              self.launch_testpmd_as_vhost_user()
> -            self.send_and_verify(vm_cycle, "reconnet from vhost")
> -
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> vhost")
> +            self.check_reconnect_perf()
>          # reconnet from qemu
>          self.logger.info('now reconnect from vm')
>          for i in range(self.reconnect_times):
>              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
>              self.start_vms(packed=True)
>              self.vm_testpmd_start()
> -            self.send_and_verify(vm_cycle, "reconnet from VM")
> +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from
> VM")
> +            self.check_reconnect_perf()
>          self.result_table_print()
>
>      def test_perf_packed_ring_virtio_net_reconnet_two_vms(self):
> @@ -465,7 +492,7 @@ class TestPVPVhostUserReconnect(TestCase):
>          self.start_vms(packed=True)
>          self.config_vm_intf()
>          self.start_iperf()
> -        self.iperf_result_verify(vm_cycle, 'before reconnet')
> +        self.before_data = self.iperf_result_verify(vm_cycle, 'before reconnet')
>
>          vm_cycle = 1
>          # reconnet from vhost
> @@ -474,7 +501,8 @@ class TestPVPVhostUserReconnect(TestCase):
>              self.dut.send_expect("killall -s INT testpmd", "# ")
>              self.launch_testpmd_as_vhost_user_with_no_pci()
>              self.start_iperf()
> -            self.iperf_result_verify(vm_cycle, 'reconnet from vhost')
> +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet
> from vhost')
> +            self.check_reconnect_perf()
>
>          # reconnet from VM
>          self.logger.info('now reconnect from vm')
> @@ -485,7 +513,8 @@ class TestPVPVhostUserReconnect(TestCase):
>              self.start_vms(packed=True)
>              self.config_vm_intf()
>              self.start_iperf()
> -            self.iperf_result_verify(vm_cycle, 'reconnet from vm')
> +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet
> from vm')
> +            self.check_reconnect_perf()
>          self.result_table_print()
>
>      def tear_down(self):
> --
> 1.8.3.1
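
For reference while reviewing, the 15% tolerance that the patch's check_reconnect_perf() enforces can be sketched standalone. This is a simplified illustration of the scalar branch, not code from the patch; the helper name within_tolerance is hypothetical:

```python
def within_tolerance(before, reconnect, tolerance=0.15):
    """Pass when the post-reconnect result dropped by less than
    `tolerance` (15%) of the pre-reconnect baseline, mirroring
    (before_data - reconnect_data) < before_data * 0.15."""
    return (before - reconnect) < before * tolerance

# A 10% throughput drop passes; a 20% drop fails.
print(within_tolerance(10.0, 9.0))   # True
print(within_tolerance(10.0, 8.0))   # False
```

The wider 15% window (up from 5%) trades some sensitivity for fewer false failures from normal network fluctuation across reconnect loops.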