From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from dpdk.org (dpdk.org [92.243.14.124]) by dpdk.space (Postfix) with ESMTP id 70FEFA00E6 for ; Wed, 12 Jun 2019 10:19:22 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3D0DA1C419; Wed, 12 Jun 2019 10:19:22 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 8242E1C400 for ; Wed, 12 Jun 2019 10:19:20 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 12 Jun 2019 01:19:18 -0700 X-ExtLoop1: 1 Received: from fmsmsx107.amr.corp.intel.com ([10.18.124.205]) by orsmga006.jf.intel.com with ESMTP; 12 Jun 2019 01:19:18 -0700 Received: from fmsmsx153.amr.corp.intel.com (10.18.125.6) by fmsmsx107.amr.corp.intel.com (10.18.124.205) with Microsoft SMTP Server (TLS) id 14.3.408.0; Wed, 12 Jun 2019 01:19:18 -0700 Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by FMSMSX153.amr.corp.intel.com (10.18.125.6) with Microsoft SMTP Server (TLS) id 14.3.408.0; Wed, 12 Jun 2019 01:19:17 -0700 Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.104]) by SHSMSX152.ccr.corp.intel.com ([169.254.6.225]) with mapi id 14.03.0439.000; Wed, 12 Jun 2019 16:19:15 +0800 From: "Tu, Lijuan" To: "thaq@marvell.com" , "dts@dpdk.org" CC: "fmasood@marvell.com" , "avijay@marvell.com" , "jerinj@marvell.com" Thread-Topic: [dts] [PATCH] TestSuite_eventdev_pipeline_perf.py: Adding Eventdev_pipeline features performance Testscript Thread-Index: AQHVIOqnNF2NU/JNxUuiKM9Dxm7oM6aXrSqA Date: Wed, 12 Jun 2019 08:19:15 +0000 Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BABB97A@SHSMSX101.ccr.corp.intel.com> References: <1560321973-30152-1-git-send-email-thaq@marvell.com> In-Reply-To: <1560321973-30152-1-git-send-email-thaq@marvell.com> Accept-Language: zh-CN, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: dlp-product: dlpe-windows dlp-version: 11.0.600.7 dlp-reaction: no-action x-ctpclassification: CTP_NT x-titus-metadata-40: eyJDYXRlZ29yeUxhYmVscyI6IiIsIk1ldGFkYXRhIjp7Im5zIjoiaHR0cDpcL1wvd3d3LnRpdHVzLmNvbVwvbnNcL0ludGVsMyIsImlkIjoiYWVkZjFhOTctMzdkOS00Mjg2LWIxODUtNzQzMmUxNmQ2ZDQzIiwicHJvcHMiOlt7Im4iOiJDVFBDbGFzc2lmaWNhdGlvbiIsInZhbHMiOlt7InZhbHVlIjoiQ1RQX05UIn1dfV19LCJTdWJqZWN0TGFiZWxzIjpbXSwiVE1DVmVyc2lvbiI6IjE3LjEwLjE4MDQuNDkiLCJUcnVzdGVkTGFiZWxIYXNoIjoiS1NGbEg4ajFoeUNvTks2RnhUVGFkV08rTjJPdU9hTitwSXhLenhPb1BVSXBiSTk4QXFpUUM0dXZzT3B5Yk1GTCJ9 x-originating-ip: [10.239.127.40] Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Subject: Re: [dts] [PATCH] TestSuite_eventdev_pipeline_perf.py: Adding Eventdev_pipeline features performance Testscript X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Sender: "dts" Applied, thanks > -----Original Message----- > From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of thaq@marvell.com > Sent: Wednesday, June 12, 2019 2:46 PM > To: dts@dpdk.org > Cc: fmasood@marvell.com; avijay@marvell.com; jerinj@marvell.com; > Thanseerulhaq > Subject: [dts] [PATCH] TestSuite_eventdev_pipeline_perf.py: Adding > Eventdev_pipeline features performance Testscript >=20 > From: Thanseerulhaq >=20 > Adding performance testscripts for 1/2/4 
NIC ports for eventdev_pipeline > features atomic, parallel, order stages. > Adding eventdev_perf testcase supports for Marvell cards in > test_case_supportlist.json file. >=20 > Signed-off-by: Thanseerulhaq > --- > conf/test_case_supportlist.json | 135 +++++ > tests/TestSuite_eventdev_pipeline_perf.py | 834 > ++++++++++++++++++++++++++++++ > 2 files changed, 969 insertions(+) > create mode 100644 tests/TestSuite_eventdev_pipeline_perf.py >=20 > diff --git a/conf/test_case_supportlist.json b/conf/test_case_supportlist= .json > index 822fb27..e8c2a43 100644 > --- a/conf/test_case_supportlist.json > +++ b/conf/test_case_supportlist.json > @@ -1230,5 +1230,140 @@ > "Bug ID": "", > "Comments": "This case currently support for cavium_a063 " > } > + ], > + "perf_eventdev_pipeline_1ports_atomic_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_1ports_parallel_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_1ports_order_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_2ports_atomic_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_2ports_parallel_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_2ports_order_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_4ports_atomic_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_4ports_parallel_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > + ], > + "perf_eventdev_pipeline_4ports_order_performance": [ > + { > + "OS": [ > + "ALL" > + ], > + "NIC": [ > + "cavium_a063" > + ], > + "Target": [ > + "ALL" > + ], > + "Bug ID": "", > + "Comments": "This case currently support for cavium_a063 " > + } > ] > } > diff --git a/tests/TestSuite_eventdev_pipeline_perf.py > b/tests/TestSuite_eventdev_pipeline_perf.py > new file mode 100644 > index 0000000..1d8a374 > --- /dev/null > +++ b/tests/TestSuite_eventdev_pipeline_perf.py > @@ -0,0 +1,834 @@ > +# BSD LICENSE > +# SPDX-License-Identifier: BSD-3-Clause # Copyright (C) 2019 Marvell > +International Ltd. > + > +""" > +DPDK Test suite. 
> +Test userland 10Gb/25Gb/40Gb/100Gb > +""" > + > +import utils > +import re > +import time > +import os > + > +from test_case import TestCase > +from time import sleep > +from settings import HEADER_SIZE > +from pmd_output import PmdOutput > +from etgen import IxiaPacketGenerator > + > +from settings import FOLDERS > +from system_info import SystemInfo > +import perf_report > +from datetime import datetime > + > +class TestEventdevPipelinePerf(TestCase,IxiaPacketGenerator): > + > + def set_up_all(self): > + """ > + Run at the start of each test suite. > + > + PMD prerequisites. > + """ > + > + self.tester.extend_external_packet_generator(TestEventdevPipelinePerf, > + self) > + > + self.frame_sizes =3D [64, 128, 256, 512, 1024, 1518] > + > + self.rxfreet_values =3D [0, 8, 16, 32, 64, 128] > + > + self.test_cycles =3D [ > + {'cores': '1S/2C/1T', 'Mpps': {}, 'pct': {}}, > + {'cores': '1S/3C/1T', 'Mpps': {}, 'pct': {}}, > + {'cores': '1S/5C/1T', 'Mpps': {}, 'pct': {}}, > + {'cores': '1S/9C/1T', 'Mpps': {}, 'pct': {}}, > + {'cores': '1S/17C/1T', 'Mpps': {}, 'pct': {}}, > + ] > + self.get_cores_from_last =3D True > + self.table_header =3D ['Frame Size'] > + for test_cycle in self.test_cycles: > + m =3D re.search(r"(\d+S/)(\d+)(C/\d+T)",test_cycle['cores']) > + cores =3D m.group(1) + str(int(m.group(2))-1) + m.group(3) > + self.table_header.append("%s Mpps" % cores) > + self.table_header.append("% linerate") > + > + self.perf_results =3D {'header': [], 'data': []} > + > + self.blacklist =3D "" > + > + # Based on h/w type, choose how many ports to use > + self.dut_ports =3D self.dut.get_ports() > + if self.dut.get_os_type() =3D=3D 'linux': > + # Get dut system information > + port_num =3D self.dut_ports[0] > + pci_device_id =3D self.dut.ports_info[port_num]['pci'] > + ori_driver =3D self.dut.ports_info[port_num]['port'].get_nic= _driver() > + self.dut.ports_info[port_num]['port'].bind_driver() > + > + > + self.dut.ports_info[port_num]['port'].bind_driver(ori_driver) > + > + if self.nic =3D=3D "cavium_a063": > + self.eventdev_device_bus_id =3D "0002:0e:00.0" > + self.eventdev_device_id =3D "a0f9" > + > + #### Bind evendev device #### > + > + self.dut.bind_eventdev_port(port_to_bind=3Dself.eventdev_device_bus_id) > + > + #### Configuring evendev SS0 & SSOw limits #### > + self.dut.set_eventdev_port_limits(self.eventdev_device_id, > + self.eventdev_device_bus_id) > + > + self.headers_size =3D HEADER_SIZE['eth'] + HEADER_SIZE[ > + 'ip'] + HEADER_SIZE['tcp'] > + > + self.ports_socket =3D self.dut.get_numa_id(self.dut_ports[0]) > + > + self.pmdout =3D PmdOutput(self.dut) > + > + def set_up(self): > + """ > + Run before each test case. > + """ > + pass > + > + def eventdev_cmd(self, stlist, nports, wmask): > + > + self.Port_pci_ids =3D [] > + command_line1 =3D "dpdk-eventdev_pipeline -c %s -w %s" > + for i in range(0, nports): > + self.Port_pci_ids.append(self.dut.ports_info[i]['pci']) > + ## Adding core-list and pci-ids > + command_line1 =3D command_line1 + " -w %s " > + ## Adding test and stage types > + command_line2 =3D "-- -w %s -n=3D0 --dump %s -m 16384" % (wmask = , > stlist ) > + return command_line1 + command_line2 > + > + def test_perf_eventdev_pipeline_1ports_atomic_performance(self): > + """ > + Evendev_Pipeline Performance Benchmarking with 1 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 1, "Insufficient ports for = 1 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("", 1, worker_core_mask) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 1) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_1ports_parallel_performance(self): > + """ > + Evendev_Pipeline Performance Benchmarking with 1 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 1, "Insufficient ports for = 1 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("-p", 1, worker_core_mask= ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 1) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_1ports_order_performance(self): > + """ > + Evendev_Pipeline Performance Benchmarking with 1 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 1, "Insufficient ports for = 1 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("-o", 1, worker_core_mask= ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 1) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_2ports_atomic_performance(self): > + """ > + Evendev_Pipeline Performance Benchmarking with 2 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 2, "Insufficient ports for = 2 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[1])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[1])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("", 2, worker_core_mask ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0], self.Port_pci_ids[1]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 2) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_2ports_parallel_performance(self): > + """ > + Evendev_Pipeline parallel schedule type Performance Benchmarking > with 2 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 2, "Insufficient ports for = 2 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[1])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[1])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("-p", 2, worker_core_mask= ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0], self.Port_pci_ids[1]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 2) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_2ports_order_performance(self): > + """ > + Evendev_Pipeline Order schedule type Performance Benchmarking > with 2 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 2, "Insufficient ports for = 2 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[1])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[1])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("-o", 2, worker_core_mask= ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0], self.Port_pci_ids[1]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 2) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_4ports_atomic_performance(self): > + """ > + Evendev_Pipeline Performance Benchmarking with 4 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 4, "Insufficient ports for = 4 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[1])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[2])= , > + self.tester.get_local_port(self.dut_ports[3])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[1])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[3])= , > + self.tester.get_local_port(self.dut_ports[2])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("", 4, worker_core_mask) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0], self.Port_pci_ids[1], > self.Port_pci_ids[2], self.Port_pci_ids[3]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 4) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_4ports_parallel_performance(self): > + """ > + Evendev_Pipeline parallel schedule type Performance Benchmarking > with 4 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 4, "Insufficient ports for = 4 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[1])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[2])= , > + self.tester.get_local_port(self.dut_ports[3])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[1])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[3])= , > + self.tester.get_local_port(self.dut_ports[2])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("-p", 4, worker_core_mask= ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0], self.Port_pci_ids[1], > self.Port_pci_ids[2], self.Port_pci_ids[3]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 4) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def test_perf_eventdev_pipeline_4ports_order_performance(self): > + """ > + Evendev_Pipeline Order schedule type Performance Benchmarking > with 4 ports. 
> + """ > + self.verify(len(self.dut_ports) >=3D 4, "Insufficient ports for = 4 ports > performance test") > + self.perf_results['header'] =3D [] > + self.perf_results['data'] =3D [] > + > + all_cores_mask =3D > + utils.create_mask(self.dut.get_core_list("all")) > + > + # prepare traffic generator input > + tgen_input =3D [] > + tgen_input.append((self.tester.get_local_port(self.dut_ports[0])= , > + self.tester.get_local_port(self.dut_ports[1])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[2])= , > + self.tester.get_local_port(self.dut_ports[3])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[1])= , > + self.tester.get_local_port(self.dut_ports[0])= , > + "event_test.pcap")) > + tgen_input.append((self.tester.get_local_port(self.dut_ports[3])= , > + self.tester.get_local_port(self.dut_ports[2])= , > + "event_test.pcap")) > + > + # run testpmd for each core config > + for test_cycle in self.test_cycles: > + core_config =3D test_cycle['cores'] > + > + core_list =3D self.dut.get_core_list(core_config, > + socket=3Dself.ports_socke= t, from_last =3D > self.get_cores_from_last) > + core_mask =3D utils.create_mask(core_list) > + core_list.remove(core_list[0]) > + worker_core_mask =3D utils.create_mask(core_list) > + > + command_line =3D self.eventdev_cmd("-o", 4, worker_core_mask= ) > + command_line =3D command_line %(core_mask, > self.eventdev_device_bus_id, self.Port_pci_ids[0], self.Port_pci_ids[1], > self.Port_pci_ids[2], self.Port_pci_ids[3]) > + self.dut.send_expect(command_line,"eventdev port 0", 100) > + > + info =3D "Executing Eventdev_pipeline using %s\n" % test_cyc= le['cores'] > + self.logger.info(info) > + self.rst_report(info, annex=3DTrue) > + self.rst_report(command_line + "\n\n", frame=3DTrue, > + annex=3DTrue) > + > + for frame_size in self.frame_sizes: > + wirespeed =3D self.wirespeed(self.nic, frame_size, 4) > + > + # create pcap file > + self.logger.info("Running with frame size %d " % frame_s= ize) > + payload_size =3D frame_size - self.headers_size > + self.tester.scapy_append( > + 'wrpcap("event_test.pcap", > [Ether(src=3D"52:00:00:00:00:00")/IP(src=3D"1.2.3.4",dst=3D"1.1.1.1")/TCP= ()/("X"*%d)] > )' % (payload_size)) > + self.tester.scapy_execute() > + > + # run traffic generator > + _, pps =3D self.tester.traffic_generator_throughput(tgen= _input, > rate_percent=3D100, delay=3D60) > + pps /=3D 1000000.0 > + pct =3D pps * 100 / wirespeed > + test_cycle['Mpps'][frame_size] =3D float('%.3f' % pps) > + test_cycle['pct'][frame_size] =3D float('%.3f' % pct) > + > + self.dut.send_expect("^C", "# ", 5) > + sleep(5) > + > + for n in range(len(self.test_cycles)): > + for frame_size in self.frame_sizes: > + self.verify(self.test_cycles[n]['Mpps'][ > + frame_size] > 0, "No traffic detected") > + > + # Print results > + self.result_table_create(self.table_header) > + self.perf_results['header'] =3D self.table_header > + for frame_size in self.frame_sizes: > + table_row =3D [frame_size] > + for test_cycle in self.test_cycles: > + table_row.append(test_cycle['Mpps'][frame_size]) > + table_row.append(test_cycle['pct'][frame_size]) > + > + self.result_table_add(table_row) > + self.perf_results['data'].append(table_row) > + > + self.result_table_print() > + > + def ip(self, port, frag, src, proto, tos, dst, chksum, len, options,= version, > flags, ihl, ttl, id): > + self.add_tcl_cmd("protocol config -name ip") > + self.add_tcl_cmd('ip config -sourceIpAddr "%s"' % src) > + self.add_tcl_cmd("ip config 
-sourceIpAddrMode ipIncrHost") > + self.add_tcl_cmd("ip config -sourceIpAddrRepeatCount 100") > + self.add_tcl_cmd('ip config -destIpAddr "%s"' % dst) > + self.add_tcl_cmd("ip config -destIpAddrMode ipIdle") > + self.add_tcl_cmd("ip config -ttl %d" % ttl) > + self.add_tcl_cmd("ip config -totalLength %d" % len) > + self.add_tcl_cmd("ip config -fragment %d" % frag) > + self.add_tcl_cmd("ip config -ipProtocol ipV4ProtocolReserved255") > + self.add_tcl_cmd("ip config -identifier %d" % id) > + self.add_tcl_cmd("stream config -framesize %d" % (len + 18)) > + self.add_tcl_cmd("ip set %d %d %d" % (self.chasId, > + port['card'], port['port'])) > + > + > + def tear_down(self): > + """ > + Run after each test case. > + """ > + self.dut.send_expect("^C", "# ", 5) > + > + self.dut.unbind_eventdev_port(port_to_unbind=self.eventdev_device_bus_id) > + > + def tear_down_all(self): > + """ > + Run after each test suite. > + """ > + self.dut.kill_all() > -- > 1.8.3.1
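
For reference, the command line the new suite drives is assembled by its eventdev_cmd() helper: the event device (bus id 0002:0e:00.0 for cavium_a063, as set in set_up_all) and each NIC port under test are whitelisted with -w, the scheduling stage is selected with "" (atomic, the default), -p (parallel) or -o (ordered), and the worker-core mask is passed after the '--' separator. The standalone Python sketch below mirrors that construction; it is not part of the applied patch, and the core masks and PCI addresses in the example are hypothetical placeholders rather than values taken from the suite.

# Standalone sketch (not part of the applied patch) of how the suite's
# eventdev_cmd() helper composes a dpdk-eventdev_pipeline invocation.
# All masks and PCI addresses used under __main__ are illustrative placeholders.

def build_eventdev_pipeline_cmd(stage_flag, coremask, eventdev_bus_id,
                                port_pci_ids, worker_mask):
    # stage_flag:  "" selects atomic (the default), "-p" parallel, "-o" ordered.
    # coremask:    hex mask of every core handed to the application (-c).
    # worker_mask: hex mask of the worker cores only (-w after the '--').
    # Whitelist the event device first, then each NIC port under test.
    whitelist = " ".join("-w %s" % pci for pci in [eventdev_bus_id] + port_pci_ids)
    # '-n=0' keeps the app running until interrupted; '--dump' prints statistics on exit.
    return "dpdk-eventdev_pipeline -c %s %s -- -w %s -n=0 --dump %s -m 16384" % (
        coremask, whitelist, worker_mask, stage_flag)

if __name__ == "__main__":
    # Example: a 2-port ordered-stage run with hypothetical masks and addresses.
    print(build_eventdev_pipeline_cmd("-o", "0xff", "0002:0e:00.0",
                                      ["0002:02:00.0", "0002:03:00.0"], "0xfe"))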