From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wang, Yinan"
To: "Xiao, QimaiX" , "dts@dpdk.org"
Date: Fri, 12 Jun 2020 02:12:02 +0000
References: <1591867353-21547-1-git-send-email-qimaix.xiao@intel.com>
Subject: Re: [dts] [PATCH V2] pvp_vhost_user_reconnect: check perf data in each reconnect loop
List-Id: test suite reviews and discussions

Acked-by: Wang, Yinan

> -----Original Message-----
> From: dts On Behalf Of Xiao, QimaiX
> Sent: June 11, 2020 17:32
> To: dts@dpdk.org
> Subject: Re: [dts] [PATCH V2] pvp_vhost_user_reconnect: check perf data in
> each reconnect loop
>
> Tested-by: Xiao, QimaiX
>
> Regards,
> Xiao Qimai
>
> > -----Original Message-----
> > From: Xiao, QimaiX
> > Sent: Thursday, June 11, 2020 5:23 PM
> > To: dts@dpdk.org
> > Cc: Xiao, QimaiX
> > Subject: [dts] [PATCH V2] pvp_vhost_user_reconnect: check perf data in
> > each reconnect loop
> >
> > * 1. check perf data in each reconnect loop
> > * 2. increase perf descend tolerance from 5% to 15% because of network
> >      fluctuations
> >
> > Signed-off-by: Xiao Qimai
> > ---
> >  tests/TestSuite_pvp_vhost_user_reconnect.py | 110 ++++++--------------------
> >  1 file changed, 29 insertions(+), 81 deletions(-)
> >
> > diff --git a/tests/TestSuite_pvp_vhost_user_reconnect.py
> > b/tests/TestSuite_pvp_vhost_user_reconnect.py
> > index fa86d02..b609115 100644
> > --- a/tests/TestSuite_pvp_vhost_user_reconnect.py
> > +++ b/tests/TestSuite_pvp_vhost_user_reconnect.py
> > @@ -66,7 +66,7 @@ class TestPVPVhostUserReconnect(TestCase):
> >          else:
> >              self.socket_mem = '1024,1024'
> >
> > -        self.reconnect_times = 2
> > +        self.reconnect_times = 5
> >          self.vm_num = 1
> >          self.frame_sizes = [64, 1518]
> >          self.virtio_ip = ["1.1.1.2", "1.1.1.3"]
> > @@ -241,9 +241,9 @@ class TestPVPVhostUserReconnect(TestCase):
> >          self.vm_dut[0].send_expect(
> >              'iperf -s -p 12345 -i 1 > iperf_server.log &', '', 10)
> >          self.vm_dut[1].send_expect(
> > -            'iperf -c %s -p 12345 -i 1 -t 5 > iperf_client.log &' %
> > +            'iperf -c %s -p 12345 -i 1 -t 10 > iperf_client.log &' %
> >              self.virtio_ip[0], '', 60)
> > -        time.sleep(20)
> > +        time.sleep(15)
> >
> >      def iperf_result_verify(self, cycle, tinfo):
> >          """
> > @@ -306,14 +306,10 @@ class TestPVPVhostUserReconnect(TestCase):
> >          if isinstance(self.before_data, dict):
> >              for i in self.frame_sizes:
> >                  self.verify(
> > -                    (self.before_data[i] - self.vhost_reconnect_data[i]) < self.before_data[i] * 0.05, 'verify reconnect vhost speed failed')
> > -                self.verify(
> > -                    (self.before_data[i] - self.vm_reconnect_data[i]) < self.before_data[i] * 0.05, 'verify reconnect vm speed failed')
> > +                    (self.before_data[i] - self.reconnect_data[i]) < self.before_data[i] * 0.15, 'verify reconnect speed failed')
> >          else:
> >              self.verify(
> > -                (self.before_data - self.vhost_reconnect_data < self.before_data * 0.05, 'verify reconnect vhost speed failed'))
> > -            self.verify(
> > -                (self.before_data - self.vm_reconnect_data < self.before_data * 0.05, 'verify reconnect vm speed failed'))
> > +                (self.before_data - self.reconnect_data) < self.before_data * 0.15, 'verify reconnect speed failed')
> >
> >      def test_perf_split_ring_reconnet_one_vm(self):
> >          """
> > @@ -332,32 +328,21 @@ class TestPVPVhostUserReconnect(TestCase):
> >          vm_cycle = 1
> >          # reconnet from vhost
> >          self.logger.info('now reconnect from vhost')
> > -        vhost_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT testpmd", "# ")
> >              self.launch_testpmd_as_vhost_user()
> > -            vhost_tmp.append(self.send_and_verify(vm_cycle, "reconnet from vhost"))
> > -
> > -        self.vhost_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vhost_tmp]
> > -            self.vhost_reconnect_data[frame_size] = sum(size_value)/len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost")
> > +            self.check_reconnect_perf()
> >
> >          # reconnet from qemu
> >          self.logger.info('now reconnect from vm')
> > -        vm_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
> >              self.start_vms()
> >              self.vm_testpmd_start()
> > -            vm_tmp.append(self.send_and_verify(vm_cycle, "reconnet from VM"))
> > -
> > -        self.vm_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vm_tmp]
> > -            self.vm_reconnect_data[frame_size] = sum(size_value)/len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from VM")
> > +            self.check_reconnect_perf()
> >          self.result_table_print()
> > -        self.check_reconnect_perf()
> >
> >      def test_perf_split_ring_reconnet_two_vms(self):
> >          """
> > @@ -376,31 +361,21 @@ class TestPVPVhostUserReconnect(TestCase):
> >          vm_cycle = 1
> >          # reconnet from vhost
> >          self.logger.info('now reconnect from vhost')
> > -        vhost_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT testpmd", "# ")
> >              self.launch_testpmd_as_vhost_user()
> > -            vhost_tmp.append(self.send_and_verify(vm_cycle, "reconnet from vhost"))
> > -        self.vhost_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vhost_tmp]
> > -            self.vhost_reconnect_data[frame_size] = sum(size_value) / len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost")
> > +            self.check_reconnect_perf()
> >
> >          # reconnet from qemu
> >          self.logger.info('now reconnect from vm')
> > -        vm_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
> >              self.start_vms()
> >              self.vm_testpmd_start()
> > -            vm_tmp.append(self.send_and_verify(vm_cycle, "reconnet from VM"))
> > -
> > -        self.vm_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vm_tmp]
> > -            self.vm_reconnect_data[frame_size] = sum(size_value)/len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from VM")
> > +            self.check_reconnect_perf()
> >          self.result_table_print()
> > -        self.check_reconnect_perf()
> >
> >      def test_perf_split_ring_vm2vm_virtio_net_reconnet_two_vms(self):
> >          """
> > @@ -419,13 +394,12 @@ class TestPVPVhostUserReconnect(TestCase):
> >          vm_cycle = 1
> >          # reconnet from vhost
> >          self.logger.info('now reconnect from vhost')
> > -        vhost_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT testpmd", "# ")
> >              self.launch_testpmd_as_vhost_user_with_no_pci()
> >              self.start_iperf()
> > -            vhost_tmp.append(self.iperf_result_verify(vm_cycle, 'reconnet from vhost'))
> > -        self.vhost_reconnect_data = sum(vhost_tmp)/len(vhost_tmp)
> > +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet from vhost')
> > +            self.check_reconnect_perf()
> >
> >          # reconnet from VM
> >          self.logger.info('now reconnect from vm')
> > @@ -437,10 +411,9 @@ class TestPVPVhostUserReconnect(TestCase):
> >              self.start_vms()
> >              self.config_vm_intf()
> >              self.start_iperf()
> > -            vm_tmp.append(self.iperf_result_verify(vm_cycle, 'reconnet from vm'))
> > -        self.vm_reconnect_data = sum(vm_tmp)/len(vm_tmp)
> > +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet from vm')
> > +            self.check_reconnect_perf()
> >          self.result_table_print()
> > -        self.check_reconnect_perf()
> >
> >      def test_perf_packed_ring_reconnet_one_vm(self):
> >          """
> > @@ -448,7 +421,6 @@ class TestPVPVhostUserReconnect(TestCase):
> >          """
> >          self.header_row = ["Mode", "FrameSize(B)", "Throughput(Mpps)",
> >                             "LineRate(%)", "Cycle", "Queue Number"]
> > -        self.res = dict().fromkeys(["before_relaunch", "after_relaunch"], list())
> >          self.result_table_create(self.header_row)
> >          vm_cycle = 0
> >          self.vm_num = 1
> > @@ -459,31 +431,22 @@ class TestPVPVhostUserReconnect(TestCase):
> >
> >          vm_cycle = 1
> >          # reconnet from vhost
> > -        vhost_tmp = list()
> >          self.logger.info('now reconnect from vhost')
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT testpmd", "# ")
> >              self.launch_testpmd_as_vhost_user()
> > -            vhost_tmp.append(self.send_and_verify(vm_cycle, "reconnet from vhost"))
> > -        self.vhost_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vhost_tmp]
> > -            self.vhost_reconnect_data[frame_size] = sum(size_value) / len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost")
> > +            self.check_reconnect_perf()
> >
> >          # reconnet from qemu
> >          self.logger.info('now reconnect from vm')
> > -        vm_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
> >              self.start_vms(packed=True)
> >              self.vm_testpmd_start()
> > -            vm_tmp.append(self.send_and_verify(vm_cycle, "reconnet from VM"))
> > -        self.vm_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vm_tmp]
> > -            self.vm_reconnect_data[frame_size] = sum(size_value) / len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from VM")
> > +            self.check_reconnect_perf()
> >          self.result_table_print()
> > -        self.check_reconnect_perf()
> >
> >      def test_perf_packed_ring_reconnet_two_vms(self):
> >          """
> > @@ -491,7 +454,6 @@ class TestPVPVhostUserReconnect(TestCase):
> >          """
> >          self.header_row = ["Mode", "FrameSize(B)", "Throughput(Mpps)",
> >                             "LineRate(%)", "Cycle", "Queue Number"]
> > -        self.res = dict().fromkeys(["before_relaunch", "after_relaunch"], list())
> >          self.result_table_create(self.header_row)
> >          vm_cycle = 0
> >          self.vm_num = 2
> > @@ -503,37 +465,26 @@ class TestPVPVhostUserReconnect(TestCase):
> >          vm_cycle = 1
> >          # reconnet from vhost
> >          self.logger.info('now reconnect from vhost')
> > -        vhost_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT testpmd", "# ")
> >              self.launch_testpmd_as_vhost_user()
> > -            vhost_tmp.append(self.send_and_verify(vm_cycle, "reconnet from vhost"))
> > -
> > -        self.vhost_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vhost_tmp]
> > -            self.vhost_reconnect_data[frame_size] = sum(size_value) / len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost")
> > +            self.check_reconnect_perf()
> >          # reconnet from qemu
> >          self.logger.info('now reconnect from vm')
> > -        vm_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ")
> >              self.start_vms(packed=True)
> >              self.vm_testpmd_start()
> > -            vm_tmp.append(self.send_and_verify(vm_cycle, "reconnet from VM"))
> > -        self.vm_reconnect_data = dict()
> > -        for frame_size in self.frame_sizes:
> > -            size_value = [i[frame_size] for i in vm_tmp]
> > -            self.vm_reconnect_data[frame_size] = sum(size_value) / len(size_value)
> > +            self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from VM")
> > +            self.check_reconnect_perf()
> >          self.result_table_print()
> > -        self.check_reconnect_perf()
> >
> >      def test_perf_packed_ring_virtio_net_reconnet_two_vms(self):
> >          """
> >          test the iperf traffice can resume after reconnet
> >          """
> >          self.header_row = ["Mode", "[M|G]bits/sec", "Cycle"]
> > -        self.res = dict().fromkeys(["before_relaunch", "after_relaunch"], list())
> >          self.result_table_create(self.header_row)
> >          self.vm_num = 2
> >          vm_cycle = 0
> > @@ -546,17 +497,15 @@ class TestPVPVhostUserReconnect(TestCase):
> >          vm_cycle = 1
> >          # reconnet from vhost
> >          self.logger.info('now reconnect from vhost')
> > -        vhost_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.dut.send_expect("killall -s INT testpmd", "# ")
> >              self.launch_testpmd_as_vhost_user_with_no_pci()
> >              self.start_iperf()
> > -            vhost_tmp.append(self.iperf_result_verify(vm_cycle, 'reconnet from vhost'))
> > -        self.vhost_reconnect_data = sum(vhost_tmp)/len(vhost_tmp)
> > +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet from vhost')
> > +            self.check_reconnect_perf()
> >
> >          # reconnet from VM
> >          self.logger.info('now reconnect from vm')
> > -        vm_tmp = list()
> >          for i in range(self.reconnect_times):
> >              self.vm_dut[0].send_expect('rm iperf_server.log', '# ', 10)
> >              self.vm_dut[1].send_expect('rm iperf_client.log', '# ', 10)
> > @@ -564,10 +513,9 @@ class TestPVPVhostUserReconnect(TestCase):
> >              self.start_vms(packed=True)
> >              self.config_vm_intf()
> >              self.start_iperf()
> > -            vm_tmp.append(self.iperf_result_verify(vm_cycle, 'reconnet from vm'))
> > -        self.vm_reconnect_data = sum(vm_tmp)/len(vm_tmp)
> > +            self.reconnect_data = self.iperf_result_verify(vm_cycle, 'reconnet from vm')
> > +            self.check_reconnect_perf()
> >          self.result_table_print()
> > -        self.check_reconnect_perf()
> >
> >      def tear_down(self):
> >          #
> > --
> > 1.8.3.1
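
For readers skimming the archive: the substance of this patch is that every
reconnect iteration is now checked on the spot against the pre-reconnect
baseline, instead of averaging all iterations and checking once at the end,
and the allowed throughput drop is widened from 5% to 15%. Below is a
minimal standalone sketch of that pattern; measure_mpps and the loop body
are hypothetical stand-ins for illustration, not the DTS API.

    import random

    RECONNECT_TIMES = 5   # the patch raises this from 2
    TOLERANCE = 0.15      # widened from 0.05 to absorb network fluctuations

    def measure_mpps():
        # Hypothetical stand-in for a real throughput measurement.
        return 7.0 + random.uniform(-0.2, 0.2)

    def check_reconnect_perf(before, current):
        # Per-iteration check: because it runs inside the loop, a single
        # bad reconnect fails immediately instead of being hidden by the
        # average of all iterations.
        assert (before - current) < before * TOLERANCE, \
            'verify reconnect speed failed'

    before_mpps = measure_mpps()  # baseline before any reconnect
    for i in range(RECONNECT_TIMES):
        # ... kill and relaunch the vhost-user side here ...
        check_reconnect_perf(before_mpps, measure_mpps())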