From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wang, Yinan"
To: "Mo, YufengX" , "dts@dpdk.org"
Date: Fri, 27 Dec 2019 08:32:51 +0000
References: <20191213085140.13231-1-yufengx.mo@intel.com>
 <20191213085140.13231-5-yufengx.mo@intel.com>
In-Reply-To: <20191213085140.13231-5-yufengx.mo@intel.com>
Subject: Re: [dts] [PATCH V2 4/4] tests/metrics: upload automation script
List-Id: test suite reviews and discussions
Sender: "dts"

Acked-by: Wang, Yinan

> -----Original Message-----
> From: Mo, YufengX
> Sent: December 13, 2019 16:52
> To: dts@dpdk.org; Wang, Yinan
> Cc: Mo, YufengX
> Subject: [dts][PATCH V2 4/4] tests/metrics: upload automation script
>
>
> The metrics library implements a mechanism by which producers can publish
> numeric information for later querying by consumers. Here the dpdk-procinfo
> process is the consumer. Latency stats and bit rate are two implementations
> based on the metrics library.
>
> Signed-off-by: yufengmx
> ---
>  tests/TestSuite_metrics.py | 824 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 824 insertions(+)
>  create mode 100644 tests/TestSuite_metrics.py
>
> diff --git a/tests/TestSuite_metrics.py b/tests/TestSuite_metrics.py
> new file mode 100644
> index 0000000..d098ccd
> --- /dev/null
> +++ b/tests/TestSuite_metrics.py
> @@ -0,0 +1,824 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2010-2019 Intel Corporation. All rights reserved.
> +# All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +#   * Redistributions of source code must retain the above copyright
> +#     notice, this list of conditions and the following disclaimer.
> +#   * Redistributions in binary form must reproduce the above copyright
> +#     notice, this list of conditions and the following disclaimer in
> +#     the documentation and/or other materials provided with the
> +#     distribution.
> +#   * Neither the name of Intel Corporation nor the names of its
> +#     contributors may be used to endorse or promote products derived
> +#     from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +# 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +'''
> +DPDK Test suite.
> +'''
> +
> +import os
> +import re
> +import time
> +import random
> +import traceback
> +from copy import deepcopy
> +from pprint import pformat
> +
> +from test_case import TestCase
> +from pmd_output import PmdOutput
> +from exception import VerifyFailure
> +from settings import HEADER_SIZE
> +from packet import Packet
> +from pktgen import TRANSMIT_CONT
> +from config import SuiteConf
> +
> +
> +class TestMetrics(TestCase):
> +    BIT_RATE = 'bit_rate'
> +    LATENCY = 'latency'
> +    display_seq = {
> +        # metrics bit rate
> +        BIT_RATE: [
> +            'mean_bits_in',
> +            'peak_bits_in',
> +            'ewma_bits_in',
> +            'mean_bits_out',
> +            'peak_bits_out',
> +            'ewma_bits_out'],
> +        # metrics latency
> +        LATENCY: [
> +            'min_latency_ns',
> +            'max_latency_ns',
> +            'avg_latency_ns',
> +            'jitter_ns'], }
> +
> +    def d_a_con(self, cmd):
> +        _cmd = [cmd, '# ', 10] if isinstance(cmd, (str, unicode)) else cmd
> +        output = self.dut.alt_session.send_expect(*_cmd)
> +        output2 = self.dut.alt_session.session.get_session_before(2)
> +        return output + os.linesep + output2
> +
> +    @property
> +    def target_dir(self):
> +        # get absolute directory of target source code
> +        target_dir = '/root' + self.dut.base_dir[1:] \
> +            if self.dut.base_dir.startswith('~') else \
> +            self.dut.base_dir
> +        return target_dir
> +
> +    def get_pkt_len(self, pkt_type, frame_size=64):
> +        headers_size = sum(
> +            map(lambda x: HEADER_SIZE[x], ['eth', 'ip', pkt_type]))
> +        pktlen = frame_size - headers_size
> +        return pktlen
> +
> +    def config_stream(self, framesize):
> +        payload_size = self.get_pkt_len('udp', framesize)
> +        # set streams for traffic
> +        pkt_config = {
> +            'type': 'UDP',
> +            'pkt_layers': {
> +                'raw': {'payload': ['58'] * payload_size}}, }
> +        # create packet for send
> +        pkt_type = pkt_config.get('type')
> +        pkt_layers = pkt_config.get('pkt_layers')
> +        pkt = Packet(pkt_type=pkt_type)
> +        for layer in pkt_layers.keys():
> +            pkt.config_layer(layer, pkt_layers[layer])
> +        self.logger.debug(pformat(pkt.pktgen.pkt.command()))
> +
> +        return pkt.pktgen.pkt
> +
> +    def add_stream_to_pktgen(self, txport, rxport, send_pkt, option):
> +        stream_ids = []
> +        for pkt in send_pkt:
> +            _option = deepcopy(option)
> +            _option['pcap'] = pkt
> +            # link peer 0
> +            stream_id = self.tester.pktgen.add_stream(txport, rxport, pkt)
> +            self.tester.pktgen.config_stream(stream_id, _option)
> +            stream_ids.append(stream_id)
> +            # link peer 1
> +            stream_id = self.tester.pktgen.add_stream(rxport, txport, pkt)
> +            self.tester.pktgen.config_stream(stream_id, _option)
> +            stream_ids.append(stream_id)
> +        return stream_ids
> +
> +    def send_packets_by_pktgen(self, option):
> +        txport = option.get('tx_intf')
> +        rxport = option.get('rx_intf')
> +        rate = option.get('rate', float(100))
> +        send_pkt = option.get('stream')
> +        # clear streams before add new streams
> +        self.tester.pktgen.clear_streams()
> +        # attach streams to pktgen
> +        stream_option = {
> +            'stream_config': {
> +                'txmode': {},
> +                'transmit_mode': TRANSMIT_CONT,
> +                'rate': rate, }
> +        }
> +        stream_ids = self.add_stream_to_pktgen(
> +            txport, rxport, send_pkt, stream_option)
> +        # run pktgen traffic
> +        traffic_opt = option.get('traffic_opt')
> +        self.logger.debug(traffic_opt)
> +        result = self.tester.pktgen.measure(stream_ids, traffic_opt)
> +
> +        return result
> +
> +    def run_traffic(self, option):
> +        tester_tx_port_id = self.tester.get_local_port(self.dut_ports[0])
> +        tester_rx_port_id = self.tester.get_local_port(self.dut_ports[1])
> +        ports_topo = {
> +            'tx_intf': tester_tx_port_id,
> +            'rx_intf': tester_rx_port_id,
> +            'stream': option.get('stream'),
> +            'rate': option.get('rate') or 100.0,
> +            'traffic_opt': option.get('traffic_opt'), }
> +        # begin traffic
> +        result = self.send_packets_by_pktgen(ports_topo)
> +
> +        return result
> +
> +    def init_testpmd(self):
> +        self.testpmd = PmdOutput(self.dut)
> +
> +    def start_testpmd(self, mode):
> +        table = {
> +            self.BIT_RATE: 'bitrate-stats',
> +            self.LATENCY: 'latencystats', }
> +        if mode not in table:
> +            return
> +        option = '--{0}={1}'.format(table.get(mode), self.monitor_cores)
> +        self.testpmd.start_testpmd(
> +            '1S/2C/1T',
> +            eal_param='-v',
> +            param=option)
> +        self.is_pmd_on = True
> +
> +    def set_testpmd(self):
> +        cmds = [
> +            'set fwd io',
> +            'start']
> +        for cmd in cmds:
> +            self.testpmd.execute_cmd(cmd)
> +
> +    def close_testpmd(self):
> +        if not self.is_pmd_on:
> +            return
> +        self.testpmd.quit()
> +        self.is_pmd_on = False
> +
> +    def init_proc_info_tool(self):
> +        option = ' -v -- --metrics'
> +        self.dpdk_proc = os.path.join(
> +            self.target_dir, self.target, "app", "dpdk-procinfo" + option)
> +        self.metrics_stat = []
> +
> +    def proc_info_query(self, flag=None):
> +        msg = self.d_a_con(self.dpdk_proc)
> +        self.logger.debug(msg)
> +        portStatus = {}
> +        keys = self.display_seq.get(flag or self.BIT_RATE)
> +        curPortNo = None
> +        portPat = r"metrics for port (\d+).*#"
> +        summary = 'non port'
> +        for item2 in msg.splitlines():
> +            item = item2.strip(os.linesep)
> +            if 'metrics' in item:
> +                curPortNo = summary \
> +                    if summary in item.lower() else \
> +                    int("".join(re.findall(portPat, item, re.M)))
> +                portStatus[curPortNo] = {}
> +            if curPortNo is None:
> +                continue
> +            if ":" in item:
> +                status = item.strip().split(': ')
> +                if len(status) == 2:
> +                    portStatus[curPortNo][status[0]] = int(status[1].strip())
> +        retPortStatus = {}
> +        for port_id in portStatus:
> +            retPortStatus[port_id] = {}
> +            for key in keys:
> +                retPortStatus[port_id][key] = portStatus[port_id][key]
> +        self.logger.debug(pformat(retPortStatus))
> +
> +        return retPortStatus
> +
> +    def proc_info_query_bit_rate(self):
> +        self.metrics_stat.append(self.proc_info_query(self.BIT_RATE))
> +
> +    def proc_info_query_latency(self):
> +        self.metrics_stat.append(self.proc_info_query(self.LATENCY))
> +
> +    def display_suite_result(self, data):
> +        values = data.get('values')
> +        title = data.get('title')
> +        self.result_table_create(title)
> +        for value in values:
> +            self.result_table_add(value)
> +        self.result_table_print()
> +
> +    def display_metrics_data(self, port_status, mode=None):
> +        mode = mode if mode else self.BIT_RATE
> +        display_seq = self.display_seq.get(mode)
> +        textLength = max(map(lambda x: len(x), display_seq))
> +        for port in sorted(port_status.keys()):
> +            port_value = port_status[port]
> +            if port != 'non port':
> +                self.logger.info("port {0}".format(port))
> +                for key in display_seq:
> +                    value = port_value[key]
> +                    self.logger.info("{0} = [{1}]".format(
> +                        key.ljust(textLength), value))
> +            else:
> +                maxvalue = max(map(lambda x: int(x), port_value.values()))
> +                if not maxvalue:
> +                    continue
> +                self.logger.info("port {0}".format(port))
> +                for key in display_seq:
> +                    value = port_value[key]
> +                    if value:
> +                        self.logger.info("{0} = [{1}]".format(
> +                            key.ljust(textLength), value))
> +
> +    def sample_bit_rate_after_stop_traffic(self, query_interval):
> +        self.logger.info("sample data after stop traffic")
> +        max_query_count = self.query_times_after_stop
> +        bit_rate_stop_results = []
> +        while max_query_count:
> +            time.sleep(query_interval)
> +            bit_rate_stop_results.append(self.proc_info_query(self.BIT_RATE))
> +            max_query_count -= 1
> +        # get statistics after stopping testpmd
> +        self.testpmd.execute_cmd('stop')
> +        stop_testpmd_results = []
> +        max_query_count = 2
> +        while max_query_count:
> +            time.sleep(query_interval)
> +            stop_testpmd_results.append(self.proc_info_query(self.BIT_RATE))
> +            max_query_count -= 1
> +        # check metrics status
> +        first_result = stop_testpmd_results[0]
> +        second_result = stop_testpmd_results[1]
> +        if cmp(first_result, second_result) == 0:
> +            msg = "bit rate statistics stop successfully after testpmd stops"
> +            self.logger.info(msg)
> +        else:
> +            msg = "bit rate statistics fail to stop after testpmd stops"
> +            self.logger.warning(msg)
> +        return bit_rate_stop_results
> +
> +    def sample_bit_rate(self, frame_size, content):
> +        duration = content.get('duration')
> +        sample_number = content.get('sample_number')
> +        query_interval = duration / sample_number
> +        # start testpmd
> +        self.testpmd.execute_cmd('start')
> +        # run traffic
> +        self.metrics_stat = []
> +        opt = {
> +            'stream': self.streams.get(frame_size),
> +            'traffic_opt': {
> +                'method': 'throughput',
> +                'duration': duration,
> +                'interval': query_interval,
> +                'callback': self.proc_info_query_bit_rate}, }
> +        result = self.run_traffic(opt)
> +        pktgen_results = [{'total': {'rx_bps': rx_bps, 'rx_pps': rx_pps}}
> +                          for rx_pps, rx_bps in result]
> +        # get data after traffic stops
> +        metrics_stats_after_stop = self.sample_bit_rate_after_stop_traffic(
> +            query_interval)
> +        # save testing configuration
> +        sub_stats = {
> +            'pktgen_stats_on_traffic': pktgen_results,
> +            'metrics_stats_on_traffic': deepcopy(self.metrics_stat),
> +            'metrics_stats_after_traffic_stop': metrics_stats_after_stop,
> +            'test_content': {
> +                'frame_size': frame_size,
> +                'traffic_duration': duration,
> +                'query_interval': query_interval}}
> +
> +        return sub_stats
> +
> +    def display_metrics_bit_rate(self, metrics_data):
> +        title = ['No', 'port']
> +        values = []
> +        for index, result in enumerate(metrics_data):
> +            for port, data in sorted(result.iteritems()):
> +                _value = [index, port]
> +                for key, value in data.iteritems():
> +                    if key not in title:
> +                        title.append(key)
> +                    _value.append(value)
> +                values.append(_value)
> +        metrics_data = {
> +            'title': title,
> +            'values': values}
> +        self.display_suite_result(metrics_data)
> +
> +    def calculate_bit_rate_deviation(self, pktgen_stats, metrics_stats):
> +        pktgen_bps = max([result.get('total').get('rx_bps')
> +                          for result in pktgen_stats])
> +        metrics_bps_in = 0
> +        metrics_bps_out = 0
> +        for index, result in enumerate(metrics_stats):
> +            for port_id in self.dut_ports:
> +                metrics_bps_in += result.get(port_id).get('mean_bits_in')
> +                metrics_bps_out += result.get(port_id).get('mean_bits_out')
> +        mean_metrics_bps_in = metrics_bps_in / (index + 1)
> +        mean_metrics_bps_out = metrics_bps_out / (index + 1)
> +
> +        return (1.0 - float(mean_metrics_bps_in) / float(pktgen_bps),
> +                1.0 - float(mean_metrics_bps_out) / float(pktgen_bps))
> +
> +    def check_metrics_data_after_stop_traffic(self, data):
> +        # check mean_bits, it should be zero
> +        for port_id in self.dut_ports:
> +            for result in data:
> +                metrics_bps_in = result.get(port_id).get('mean_bits_in')
> +                metrics_bps_out = result.get(port_id).get('mean_bits_out')
> +                if metrics_bps_in or metrics_bps_out:
> +                    msg = 'mean_bits bps is not cleared as expected'
> +                    raise VerifyFailure(msg)
> +        # check peak_bits, it should stay the same
> +        for port_id in self.dut_ports:
> +            peak_bits_in = []
> +            peak_bits_out = []
> +            for result in data:
> +                peak_bits_in.append(result.get(port_id).get('peak_bits_in'))
> +                peak_bits_out.append(result.get(port_id).get('peak_bits_out'))
> +            if len(set(peak_bits_in)) > 1 or len(set(peak_bits_out)) > 1:
> +                msg = 'peak_bits bps does not keep the maximum value'
> +                raise VerifyFailure(msg)
> +        # check ewma_bits, it should decrease step by step
> +        for port_id in self.dut_ports:
> +            for key in ['ewma_bits_in', 'ewma_bits_out']:
> +                ewma_bits = []
> +                for result in data:
> +                    ewma_bits.append(result.get(port_id).get(key))
> +                status = [ewma_bits[index] > ewma_bits[index + 1]
> +                          for index in range(len(ewma_bits) - 1)]
> +                if all(status):
> +                    continue
> +                msg = 'ewma_bits bps does not decrease'
> +                raise VerifyFailure(msg)
> +
> +    def check_one_bit_rate_deviation(self, data, bias):
> +        # display test content
> +        test_content = data.get('test_content')
> +        test_cfg = {
> +            'title': test_content.keys(),
> +            'values': [test_content.values()]}
> +        self.display_suite_result(test_cfg)
> +        # display pktgen bit rate statistics on traffic
> +        self.logger.info("pktgen bit rate statistics:")
> +        pktgen_results = data.get('pktgen_stats_on_traffic')
> +        self.display_metrics_bit_rate(pktgen_results)
> +        # display metrics bit rate statistics on traffic
> +        self.logger.info("dpdk metrics bit rate statistics on traffic:")
> +        metrics_results = data.get('metrics_stats_on_traffic')
> +        self.display_metrics_bit_rate(metrics_results)
> +        # check bit rate bias between packet generator and dpdk metrics
> +        in_bias, out_bias = \
> +            self.calculate_bit_rate_deviation(pktgen_results, metrics_results)
> +        msg = ('in bps bias is {0} '
> +               'out bps bias is {1} '
> +               'expected bias is {2}').format(in_bias, out_bias, bias)
> +        self.logger.info(msg)
> +        if in_bias > bias or out_bias > bias:
> +            msg = ('metrics mean_bits bps has more than {} bias '
> +                   'compared with pktgen bps').format(bias)
> +            raise VerifyFailure(msg)
> +        # display dpdk metrics bit rate statistics after stop traffic
> +        self.logger.info("dpdk metrics bit rate statistics when traffic stops:")
> +        metrics_results = data.get('metrics_stats_after_traffic_stop')
> +        self.display_metrics_bit_rate(metrics_results)
> +        # check metrics tool mean_bits/ewma_bits/peak_bits behavior after
> +        # traffic stops
> +        self.check_metrics_data_after_stop_traffic(metrics_results)
> +
> +    def sample_bit_rate_peak_after_stop_traffic(self, query_interval):
> +        self.logger.info("sample data after stop traffic")
> +        # sample data after stop
> +        max_query_count = self.query_times_after_stop
> +        bit_rate_stop_results = []
> +        while max_query_count:
> +            time.sleep(query_interval)
> +            bit_rate_stop_results.append(self.proc_info_query(self.BIT_RATE))
> +            max_query_count -= 1
> +        # check metrics status after stopping testpmd
> +        self.testpmd.execute_cmd('stop')
> +        self.logger.info("get metrics bit rate after stopping testpmd:")
> +        stop_testpmd_results = []
> +        max_query_count = 2
> +        while max_query_count:
> +            time.sleep(query_interval)
> +            stop_testpmd_results.append(self.proc_info_query(self.BIT_RATE))
> +            max_query_count -= 1
> +        # check metrics tool status
> +        first_result = stop_testpmd_results[0]
> +        second_result = stop_testpmd_results[1]
> +        if cmp(first_result, second_result) == 0:
> +            msg = "metrics bit rate stops successfully after testpmd stops"
> +            self.logger.info(msg)
> +        else:
> +            msg = "metrics bit rate fails to stop after testpmd stops"
> +            self.logger.warning(msg)
> +        return bit_rate_stop_results
> +
> +    def sample_bit_rate_peak(self, frame_size, rate, content):
> +        duration = content.get('duration')
> +        sample_number = content.get('sample_number')
> +        query_interval = duration / sample_number
> +        # start testpmd
> +        self.testpmd.execute_cmd('start')
> +        # run traffic
> +        opt = {
> +            'stream': self.streams.get(frame_size),
> +            'rate': rate,
> +            'traffic_opt': {
> +                'method': 'throughput',
> +                'duration': duration, }, }
> +        result = self.run_traffic(opt)
> +        pktgen_results = [{'total': {'rx_bps': rx_bps, 'rx_pps': rx_pps}}
> +                          for rx_pps, rx_bps in [result]]
> +        # get data after traffic stops
> +        metrics_stats_after_stop = \
> +            self.sample_bit_rate_peak_after_stop_traffic(query_interval)
> +        # save testing configuration
> +        sub_stats = {
> +            'pktgen_stats_on_traffic': pktgen_results,
> +            'metrics_stats_after_traffic_stop': metrics_stats_after_stop,
> +            'test_content': {
> +                'rate': rate,
> +                'frame_size': frame_size,
> +                'traffic_duration': duration,
> +                'query_interval': query_interval}}
> +
> +        return sub_stats
> +
> +    def get_one_bit_rate_peak(self, data):
> +        # display test content
> +        test_content = data.get('test_content')
> +        test_cfg = {
> +            'title': test_content.keys(),
> +            'values': [test_content.values()]}
> +        self.display_suite_result(test_cfg)
> +        # display pktgen bit rate statistics on traffic
> +        self.logger.info("pktgen bit rate statistics:")
> +        pktgen_results = data.get('pktgen_stats_on_traffic')
> +        self.display_metrics_bit_rate(pktgen_results)
> +        pktgen_bps = max([result.get('total').get('rx_bps')
> +                          for result in pktgen_results])
> +        # display dpdk metrics bit rate statistics after stop traffic
> +        self.logger.info("dpdk bit rate statistics after stop traffic:")
> +        metrics_results = data.get('metrics_stats_after_traffic_stop')
> +        self.display_metrics_bit_rate(metrics_results)
> +        metrics_peak_data = {}
> +        for port_id in self.dut_ports:
> +            metrics_peak_data[port_id] = {
> +                'peak_bits_in':
> +                    max([result.get(port_id).get('peak_bits_in')
> +                         for result in metrics_results]),
> +                'peak_bits_out':
> +                    max([result.get(port_id).get('peak_bits_out')
> +                         for result in metrics_results]),
> +            }
> +        return pktgen_bps, metrics_peak_data
> +
> +    def check_bit_rate_peak_data(self, data):
> +        '''
> +        check ``peak_bits_in/peak_bits_out`` should keep the first maximum
> +        value when the packet generator works with decreasing traffic rate
> +        percents.
> +        '''
> +        pktgen_stats = []
> +        metrics_stats = []
> +        for sub_data in data:
> +            pktgen_stat, metrics_stat = self.get_one_bit_rate_peak(sub_data)
> +            pktgen_stats.append(pktgen_stat)
> +            metrics_stats.append(metrics_stat)
> +        # check if traffic runs with decreasing rate percents
> +        status = [pktgen_stats[index] > pktgen_stats[index + 1]
> +                  for index in range(len(pktgen_stats) - 1)]
> +        msg = 'traffic does not run with decreasing rate percents'
> +        self.verify(all(status), msg)
> +        # check ``peak_bits_in/peak_bits_out`` keep the first maximum value
> +        for port_id in self.dut_ports:
> +            for key in ['peak_bits_in', 'peak_bits_out']:
> +                peak_values = [metrics_stat.get(port_id).get(key)
> +                               for metrics_stat in metrics_stats]
> +                max_value = max(peak_values)
> +                if max_value != metrics_stats[0].get(port_id).get(key):
> +                    msg = 'port {0} {1} does not keep maximum value'.format(
> +                        port_id, key)
> +                    raise VerifyFailure(msg)
> +
> +    def sample_latency_after_stop_traffic(self, query_interval):
> +        self.logger.info("sample statistics after stop traffic")
> +        # sample data after stop
> +        max_query_count = self.query_times_after_stop
> +        latency_stop_results = []
> +        while max_query_count:
> +            time.sleep(5)
> +            latency_stop_results.append(self.proc_info_query(self.LATENCY))
> +            max_query_count -= 1
> +        # check statistics status after stopping testpmd
> +        self.testpmd.execute_cmd('stop')
> +        self.logger.info("query metrics latency after stopping testpmd:")
> +        stop_testpmd_results = []
> +        max_query_count = 2
> +        while max_query_count:
> +            time.sleep(query_interval)
> +            stop_testpmd_results.append(self.proc_info_query(self.LATENCY))
> +            max_query_count -= 1
> +        # check metrics behavior
> +        first_result = stop_testpmd_results[0]
> +        second_result = stop_testpmd_results[1]
> +        if cmp(first_result, second_result) == 0:
> +            msg = "metrics latency stops successfully after testpmd stops"
> +            self.logger.info(msg)
> +        else:
> +            msg = "metrics latency fails to stop after testpmd stops"
> +            self.logger.warning(msg)
> +        return latency_stop_results
> +
> +    def display_metrics_latency(self, metrics_data):
> +        title = ['No', 'port']
> +        values = []
> +        for index, result in enumerate(metrics_data):
> +            for port_id, data in result.iteritems():
> +                _value = [index, port_id]
> +                for key, value in data.iteritems():
> +                    if key not in title:
> +                        title.append(key)
> +                    _value.append(value)
> +                values.append(_value)
> +        metrics_data = {
> +            'title': title,
> +            'values': values}
> +        self.display_suite_result(metrics_data)
> +
> +    def sample_latency(self, frame_size, content):
> +        self.metrics_stat = []
> +        duration = content.get('duration')
> +        sample_number = content.get('sample_number')
> +        query_interval = duration / sample_number
> +        # start testpmd
> +        self.testpmd.execute_cmd('start')
> +        # run traffic
> +        opt = {
> +            'stream': self.streams.get(frame_size),
> +            'traffic_opt': {
> +                'method': 'latency',
> +                'duration': duration, }, }
> +        pktgen_results = self.run_traffic(opt)
> +        # get data after traffic stops
> +        metrics_stats_after_stop = self.sample_latency_after_stop_traffic(
> +            query_interval)
> +        # save testing configuration and results
> +        sub_stats = {
> +            'pktgen_stats_on_traffic': pktgen_results,
> +            'metrics_stats_after_traffic_stop': metrics_stats_after_stop,
> +            'test_content': {
> +                'rate': 100.0,
> +                'frame_size': frame_size,
> +                'traffic_duration': duration,
> +                'query_interval': query_interval}}
> +
> +        return sub_stats
> +
> +    def check_one_latency_data(self, data):
> +        '''
> +        The packet generator calculates line latency between tx port and
> +        rx port; dpdk metrics calculates packet forwarding latency between
> +        rx and tx inside testpmd. These two types of latency data serve
> +        different purposes.
> +        '''
> +        # display test content
> +        test_content = data.get('test_content')
> +        test_cfg = {
> +            'title': test_content.keys(),
> +            'values': [test_content.values()]}
> +        self.display_suite_result(test_cfg)
> +        # display pktgen latency statistics on traffic
> +        self.logger.info("pktgen line latency statistics:")
> +        pktgen_results = data.get('pktgen_stats_on_traffic')
> +        self.display_metrics_latency([pktgen_results])
> +        # check if the value is reasonable, no reference data
> +        for port, value in pktgen_results.iteritems():
> +            max_value = value.get('max')
> +            min_value = value.get('min')
> +            average = value.get('average')
> +            if max_value == 0 and average == 0 and min_value == 0:
> +                msg = 'failed to get pktgen latency data'
> +                raise VerifyFailure(msg)
> +            if max_value > average and average > min_value and min_value > 0:
> +                continue
> +            msg = ('pktgen latency is wrong: '
> +                   'max <{0}> '
> +                   'average <{1}> '
> +                   'min <{2}>').format(max_value, average, min_value)
> +            raise VerifyFailure(msg)
> +        # display dpdk metrics latency statistics
> +        self.logger.info("dpdk forwarding latency statistics:")
> +        metrics_results = data.get('metrics_stats_after_traffic_stop')
> +        self.display_metrics_latency(metrics_results)
> +        # check if the value is reasonable, no reference data
> +        for index, result in enumerate(metrics_results):
> +            for port, value in result.iteritems():
> +                if port != 'non port':
> +                    continue
> +                max_value = value.get('max_latency_ns')
> +                min_value = value.get('min_latency_ns')
> +                average = value.get('avg_latency_ns')
> +                # treat empty data as a failure
> +                if max_value == 0 and average == 0 and min_value == 0:
> +                    msg = 'failed to get metrics latency data'
> +                    raise VerifyFailure(msg)
> +                if max_value > average and \
> +                   average > min_value and min_value > 0:
> +                    continue
> +                msg = ('metrics latency is wrong: '
> +                       'max_latency_ns <{0}> '
> +                       'avg_latency_ns <{1}> '
> +                       'min_latency_ns <{2}>').format(
> +                    max_value, average, min_value)
> +                raise VerifyFailure(msg)
> +        msg = 'frame_size {0} latency data is ok.'.format(
> +            test_content.get('frame_size'))
> +        self.logger.info(msg)
> +
> +    def verify_bit_rate(self):
> +        except_content = None
> +        try:
> +            # set testpmd to ready status
> +            self.start_testpmd(self.BIT_RATE)
> +            self.set_testpmd()
> +            stats = []
> +            for frame_size in self.test_content.get('frame_sizes'):
> +                sub_stats = self.sample_bit_rate(frame_size, self.test_content)
> +                stats.append(
> +                    [sub_stats, self.test_content.get('bias').get(frame_size)])
> +            for data, bias in stats:
> +                self.check_one_bit_rate_deviation(data, bias)
> +        except Exception as e:
> +            self.logger.error(traceback.format_exc())
> +            except_content = e
> +        finally:
> +            self.close_testpmd()
> +
> +        # re-raise verify exception result
> +        if except_content:
> +            raise Exception(except_content)
> +
> +    def verify_bit_rate_peak(self):
> +        except_content = None
> +        try:
> +            # set testpmd to ready status
> +            self.start_testpmd(self.BIT_RATE)
> +            self.set_testpmd()
> +            stats = []
> +            frame_sizes = self.test_content.get('frame_sizes')
> +            frame_size = random.choice(frame_sizes)
> +            for rate in self.test_content.get('rates'):
> +                sub_stats = self.sample_bit_rate_peak(
> +                    frame_size, rate, self.test_content)
> +                stats.append(sub_stats)
> +            self.check_bit_rate_peak_data(stats)
> +        except Exception as e:
> +            self.logger.error(traceback.format_exc())
> +            except_content = e
> +        finally:
> +            self.close_testpmd()
> +
> +        # re-raise verify exception result
> +        if except_content:
> +            raise Exception(except_content)
> +
> +    def verify_latency_stat(self):
> +        except_content = None
> +        try:
> +            # set testpmd to ready status
> +            self.start_testpmd(self.LATENCY)
> +            self.set_testpmd()
> +            # get test content
> +            stats = []
> +            for frame_size in self.test_content.get('frame_sizes'):
> +                sub_stats = self.sample_latency(frame_size, self.test_content)
> +                stats.append(sub_stats)
> +            for data in stats:
> +                self.check_one_latency_data(data)
> +        except Exception as e:
> +            self.logger.error(traceback.format_exc())
> +            except_content = e
> +        finally:
> +            self.close_testpmd()
> +
> +        # re-raise verify exception result
> +        if except_content:
> +            raise Exception(except_content)
> +
> +    def verify_supported_nic(self):
> +        supported_drivers = ['ixgbe']
> +        result = all([self.dut.ports_info[port_id]['port'].default_driver in
> +                      supported_drivers
> +                      for port_id in self.dut_ports])
> +        msg = "current nic is not supported"
> +        self.verify(result, msg)
> +
> +    def get_test_content_from_cfg(self):
> +        conf = SuiteConf(self.suite_name)
> +        cfg_content = dict(conf.suite_conf.load_section('content'))
> +        frames_cfg = cfg_content.get('frames_cfg')
> +        info = [(int(item[0]), float(item[1]))
> +                for item in [item.split(':') for item in frames_cfg.split(',')]]
> +        frames_info = dict(info)
> +        test_content = {
> +            'frame_sizes': frames_info.keys(),
> +            'duration': int(cfg_content.get('duration') or 0),
> +            'sample_number': int(cfg_content.get('sample_number') or 0),
> +            'rates': [int(item)
> +                      for item in cfg_content.get('rates').split(',')],
> +            'bias': frames_info}
> +        self.query_times_after_stop = 5
> +
> +        return test_content
> +
> +    def preset_traffic(self):
> +        self.streams = {}
> +        # prepare streams instance
> +        for frame_size in self.test_content.get('frame_sizes'):
> +            self.streams[frame_size] = self.config_stream(frame_size)
> +
> +    def preset_test_environment(self):
> +        # get test content
> +        self.test_content = self.get_test_content_from_cfg()
> +        self.logger.debug(pformat(self.test_content))
> +        # binary status flag
> +        self.is_pmd_on = None
> +        # monitor cores
> +        self.monitor_cores = '2'
> +        # init binary
> +        self.init_testpmd()
> +        self.init_proc_info_tool()
> +        # traffic relevant
> +        self.preset_traffic()
> +
> +    #
> +    # Test cases.
> +    #
> +
> +    def set_up_all(self):
> +        self.dut_ports = self.dut.get_ports(self.nic)
> +        self.verify(len(self.dut_ports) >= 2, "Not enough ports")
> +        # prepare testing environment
> +        self.preset_test_environment()
> +
> +    def tear_down_all(self):
> +        """ Run after each test suite. """
> +        pass
> +
> +    def set_up(self):
> +        """ Run before each test case. """
> +        pass
> +
> +    def tear_down(self):
> +        """ Run after each test case. """
> +        self.dut.kill_all()
> +
> +    def test_perf_bit_rate_peak(self):
> +        """
> +        Test bit rate peak.
> +        """
> +        self.verify_bit_rate_peak()
> +
> +    def test_perf_bit_rate(self):
> +        """
> +        Test bit rate.
> +        """
> +        self.verify_bit_rate()
> +
> +    def test_perf_latency_stat(self):
> +        """
> +        Test latency stat.
> +        """
> +        self.verify_latency_stat()
> --
> 2.21.0
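A reviewer note on `proc_info_query()`: the method scans the `dpdk-procinfo --metrics` output for "metrics" header lines, then collects the following `name: value` pairs per port. A minimal, self-contained sketch of that parsing idea is below; the `SAMPLE` text is illustrative only (an assumption, not captured from a real dpdk-procinfo run), and `parse_metrics` is a hypothetical helper, not part of the patch.

```python
import re

# Hypothetical sample text imitating dpdk-procinfo --metrics output; the
# exact formatting of the real tool may differ.
SAMPLE = """\
###### non port specific metrics ######
###### metrics for port 0 ######
mean_bits_in: 1000
peak_bits_in: 2000
"""


def parse_metrics(text):
    """Group 'name: value' pairs under the most recent metrics header."""
    stats = {}
    cur = None
    for line in text.splitlines():
        if 'metrics' in line.lower():
            # a header starts a new section: numbered port or 'non port'
            match = re.search(r'port (\d+)', line)
            cur = int(match.group(1)) if match else 'non port'
            stats[cur] = {}
        elif ':' in line and cur is not None:
            key, _, value = line.partition(':')
            stats[cur][key.strip()] = int(value.strip())
    return stats
```

This mirrors the suite's approach (header detection, then key/value collection) without the DTS session plumbing, which may make the parsing logic easier to review in isolation.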