* [dts] [PATCH V1 1/4] ipfix_flow_classify: upload test plan
2018-06-06 5:34 [dts] [PATCH V1 0/4] ipfix_flow_classify: upload test plan yufengx.mo
@ 2018-06-06 5:34 ` yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 2/4] ipfix_flow_classify: upload automation script yufengx.mo
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: yufengx.mo @ 2018-06-06 5:34 UTC (permalink / raw)
To: dts; +Cc: yufengmx
From: yufengmx <yufengx.mo@intel.com>
This test plan covers the flow classify feature.
DPDK provides a Flow Classification library that provides the ability
to classify an input packet by matching it against a set of Flow rules.
The implementation supports counting only of IPv4 5-tuple packets that
match a particular Flow rule.
flow_classify is the sample application that calls the flow_classify
library on a group of packets, just after receiving them or just before
transmitting them.
Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
test_plans/ipfix_flow_classify_test_plan.rst | 246 +++++++++++++++++++++++++++
1 file changed, 246 insertions(+)
create mode 100644 test_plans/ipfix_flow_classify_test_plan.rst
diff --git a/test_plans/ipfix_flow_classify_test_plan.rst b/test_plans/ipfix_flow_classify_test_plan.rst
new file mode 100644
index 0000000..9492be1
--- /dev/null
+++ b/test_plans/ipfix_flow_classify_test_plan.rst
@@ -0,0 +1,246 @@
+.. Copyright (c) <2018>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================
+ipfix flow classify
+====================
+
+This document provides the test plan for the flow classify feature.
+
+Flow Classify provides flow record information with some measured properties.
+
+DPDK provides a Flow Classification library that provides the ability
+to classify an input packet by matching it against a set of Flow rules.
+The implementation supports counting only of IPv4 5-tuple packets that
+match a particular Flow rule.
+
+flow_classify is the sample application that calls the flow_classify
+library on a group of packets, just after receiving them or just before
+transmitting them. It specifies the flow type of interest and the
+measurement to apply to that flow via the rte_flow_classify_create()
+API, and provides the rte_flow_classify object and storage for the
+results via the rte_flow_classify_query() API.
+
+For reference, see the DPDK documentation::
+
+    dpdk/doc/guides/sample_app_ug/flow_classify.rst
+    dpdk/doc/guides/prog_guide/flow_classify_lib.rst
+
+Prerequisites
+-------------
+
+2 NICs (2 full-duplex optical ports per NIC).
+Flow Classify should run on at least 2 pairs of linked peer ports.
+There is no limitation on NIC type.
+
+HW configuration
+----------------
+ Tester DUT
+ .-------. .-------.
+ | port0 | <------------------> | port0 |
+ | port1 | <------------------> | port1 |
+ '-------' '-------'
+
+Stream configuration
+--------------------
+
+UDP_1:
+Frame Data/Protocols: Ethernet 2 0800, IPv4,UDP/IP, Fixed 64.
+IPv4 Header Page: Dest Address: 2.2.2.7 Src Address: 2.2.2.3
+UDP Header: Src Port: 32 Dest Port: 33
+
+UDP_2:
+Frame Data/Protocols: Ethernet 2 0800, IPv4,UDP/IP, Fixed 64.
+IPv4 Header Page: Dest Address: 9.9.9.7 Src Address: 9.9.9.3
+UDP Header: Src Port: 32 Dest Port: 33
+
+UDP_invalid:
+Frame Data/Protocols: Ethernet 2 0800, IPv4,UDP/IP, Fixed 64.
+IPv4 Header Page: Dest Address: 9.8.7.6 Src Address: 192.168.0.36
+UDP Header: Src Port: 10 Dest Port: 11
+
+TCP_1:
+Frame Data/Protocols: Ethernet 2 0800, IPv4,TCP/IP, Fixed 64.
+IPv4 Header Page: Dest Address: 9.9.9.7 Src Address: 9.9.9.3
+TCP Header: Src Port: 32 Dest Port: 33
+
+TCP_2:
+Frame Data/Protocols: Ethernet 2 0800, IPv4,TCP/IP, Fixed 64.
+IPv4 Header Page: Dest Address: 9.9.8.7 Src Address: 9.9.8.3
+TCP Header: Src Port: 32 Dest Port: 33
+
+TCP_invalid:
+Frame Data/Protocols: Ethernet 2 0800, IPv4,TCP/IP, Fixed 64.
+IPv4 Header Page: Dest Address: 9.8.7.6 Src Address: 192.168.0.36
+TCP Header: Src Port: 10 Dest Port: 11
+
+SCTP_1:
+Frame Data/Protocols: Ethernet 2 0800, IPv4, None, Fixed 256.
+IPv4 Header Page: Dest Address: 2.3.4.5 Src Address: 6.7.8.9
+SCTP Header: Src Port: 32 Dest Port: 33
+Protocol: 132-SCTP
+
+SCTP_invalid:
+Frame Data/Protocols: Ethernet 2 0800, IPv4, None, Fixed 256.
+IPv4 Header Page: Dest Address: 9.8.7.6 Src Address: 192.168.0.36
+SCTP Header: Src Port: 10 Dest Port: 11
+Protocol: 132-SCTP
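Each valid stream above is expected to hit exactly one rule index in the sample rule file, while the invalid streams should match nothing. A minimal sketch of that mapping (the index values mirror the automation script in this series; `None` marks streams that should be ignored):

```python
# expected flow_classify rule index per stream; None = no rule should match
STREAM_RULE_PRIORITY = {
    'UDP_1': 0, 'UDP_2': 1,
    'TCP_1': 2, 'TCP_2': 3,
    'SCTP_1': 4,
    'UDP_invalid': None, 'TCP_invalid': None, 'SCTP_invalid': None,
}

def expected_rule(stream_name):
    # return the rule index a stream should increment, or None
    return STREAM_RULE_PRIORITY.get(stream_name)

print(expected_rule('SCTP_1'))       # 4
print(expected_rule('UDP_invalid'))  # None
```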
+
+ixia config for streams (ixia tcl commands)
+-------------------------------------------
+each stream should be a burst of 32 packets; the stream intervals should
+follow the limits set by the following commands.
+
+stream config -numBursts 32
+stream config -gapUnit gapMilliSeconds
+stream config -ifg 10
+stream config -ibg 1
+stream config -isg 1000
+stream config -dma stopStream
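Assuming ifg, ibg, and isg denote the inter-frame, inter-burst, and inter-stream gaps (standard ixia terms), all in milliseconds here because of ``gapUnit gapMilliSeconds``, and ignoring per-frame transmit time, a rough sketch of one stream's duration under the settings above:

```python
def stream_duration_ms(frames_per_burst=32, num_bursts=32,
                       ifg=10, ibg=1, isg=1000):
    # frames within a burst are separated by ifg, bursts by ibg,
    # and the stream ends with one isg (all in milliseconds)
    per_burst = (frames_per_burst - 1) * ifg
    return num_bursts * per_burst + (num_bursts - 1) * ibg + isg

print(stream_duration_ms())  # 10951
```

This is illustrative arithmetic only, not an ixia API; it just shows how the gap parameters combine.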
+
+Compilation
+-----------
+cd $DPDK_PATH
+export RTE_SDK=`pwd`
+export RTE_TARGET=x86_64-native-linuxapp-gcc
+make -C examples/flow_classify
+
+The flow_classify binary is generated under:
+$DPDK_PATH/examples/flow_classify/build/flow_classify
+
+default rule config file:
+$DPDK_PATH/examples/flow_classify/ipv4_rules_file.txt
+
+Test cases
+----------
+The idea behind the testing process is to compare the packet count sent
+by the ixia packet generator with the packet count filtered by
+flow_classify. Valid packets should appear in flow_classify's output and
+invalid packets should be ignored. The rules are configured in a txt
+file. Testing content covers different udp/tcp/sctp streams and
+multiple rules.
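The comparison step can be sketched in Python. This is an illustrative helper, not part of the suite; it only assumes the ``rule[N] count=M`` line format that flow_classify prints:

```python
import re

def count_filtered_packets(output, rule_priority=None):
    # sum the "rule[N] count=M" counters printed by flow_classify;
    # restrict to a single rule index when rule_priority is given
    if rule_priority is not None:
        pattern = r"rule\[{0}\] count=(\d+)".format(rule_priority)
    else:
        pattern = r"rule\[\d+\] count=(\d+)"
    return sum(int(m) for m in re.findall(pattern, output))

log = "rule[0] count=32\nrule[1] count=32\n"
print(count_filtered_packets(log))     # 64
print(count_filtered_packets(log, 0))  # 32
```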
+
+Test Case: check valid rule with udp stream (performance)
+==============================================================
+Send one valid burst stream of 32 packets (UDP_1 or UDP_2 in Stream
+configuration), then check the total received packets in flow_classify's
+output message.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send stream by ixia
+
+*. check that flow_classify output contains the following message
+   rule[0] count=1
+   or
+   rule[1] count=1
+
+Test Case: check invalid rule with udp stream (performance)
+==============================================================
+Send one invalid burst stream of 32 packets (UDP_invalid in Stream
+configuration), then check that flow_classify's output has no count
+message.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send stream by ixia
+
+*. check that flow_classify output has no message such as
+   "rule[xxx] count=xxx", e.g.
+   rule[0] count=1
+
+Test Case: check valid rule with tcp stream (performance)
+==============================================================
+Send one valid burst stream of 32 packets (TCP_1 or TCP_2 in Stream
+configuration), then check the total received packets in flow_classify's
+output message.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send stream by ixia
+
+*. check that flow_classify output contains the following message
+   rule[2] count=1
+   or
+   rule[3] count=1
+
+Test Case: check invalid rule with tcp stream (performance)
+==============================================================
+Send one invalid burst stream of 32 packets (TCP_invalid in Stream
+configuration), then check that flow_classify's output has no count
+message.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send stream by ixia
+
+*. check that flow_classify output has no message such as
+   "rule[xxx] count=xxx", e.g.
+   rule[2] count=1
+
+Test Case: check valid rule with sctp stream (performance)
+==============================================================
+Send one valid burst stream of 32 packets (SCTP_1 in Stream
+configuration), then check the total received packets in flow_classify's
+output message.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send stream by ixia
+
+*. check that flow_classify output contains the following message
+   rule[4] count=1
+
+Test Case: check invalid rule with sctp stream (performance)
+==============================================================
+Send one invalid burst stream of 32 packets (SCTP_invalid in Stream
+configuration), then check that flow_classify's output has no count
+message.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send stream by ixia
+
+*. check that flow_classify output has no message such as
+   "rule[xxx] count=xxx", e.g.
+   rule[4] count=1
+
+Test Case: mixed streams (performance)
+========================================
+Send mixed burst streams of 32 packets each (all types in Stream
+configuration), then check whether they are filtered or ignored by
+flow_classify.
+
+*. boot up flow_classify
+   ./flow_classify -c 4 -n 4 -- --rule_ipv4=<rule config file>
+
+*. send mixed streams by ixia as above
+
+*. check that flow_classify output contains only the following count
+   messages
+   rule[0] count=1
+   rule[1] count=1
+   rule[2] count=1
+   rule[3] count=1
+   rule[4] count=1
--
1.9.3
* [dts] [PATCH V1 2/4] ipfix_flow_classify: upload automation script
2018-06-06 5:34 [dts] [PATCH V1 0/4] ipfix_flow_classify: upload test plan yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 1/4] " yufengx.mo
@ 2018-06-06 5:34 ` yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 3/4] ipfix_flow_classify: framework etgen/ixia function exetend yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 4/4] ipfix_flow_classify: framework setting yufengx.mo
3 siblings, 0 replies; 5+ messages in thread
From: yufengx.mo @ 2018-06-06 5:34 UTC (permalink / raw)
To: dts; +Cc: yufengmx
From: yufengmx <yufengx.mo@intel.com>
This automation script covers the flow classify feature.
DPDK provides a Flow Classification library that provides the ability
to classify an input packet by matching it against a set of Flow rules.
The implementation supports counting only of IPv4 5-tuple packets that
match a particular Flow rule.
flow_classify is the sample application that calls the flow_classify
library on a group of packets, just after receiving them or just before
transmitting them.
Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
tests/TestSuite_ipfix_flow_classify.py | 714 +++++++++++++++++++++++++++++++++
1 file changed, 714 insertions(+)
create mode 100644 tests/TestSuite_ipfix_flow_classify.py
diff --git a/tests/TestSuite_ipfix_flow_classify.py b/tests/TestSuite_ipfix_flow_classify.py
new file mode 100644
index 0000000..f8205d1
--- /dev/null
+++ b/tests/TestSuite_ipfix_flow_classify.py
@@ -0,0 +1,714 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2018 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import os
+import time
+import re
+import random
+import inspect, traceback
+
+from datetime import datetime
+from socket import htons, htonl
+
+from packet import Packet, NVGRE, IPPROTO_NVGRE
+from scapy.sendrecv import sendp
+from scapy.utils import wrpcap, rdpcap, hexstr
+
+import utils
+from test_case import TestCase
+from exception import TimeoutException, VerifyFailure
+from settings import TIMEOUT
+from pmd_output import PmdOutput
+from settings import HEADER_SIZE
+from serializer import Serializer
+
+class ExecBinProcess(object):
+
+ def __init__(self, **kwargs):
+ # initialize process parameter
+ self.dut = kwargs.get('dut')
+ self.interactive = kwargs.get('interactive') or False
+ self.name = kwargs.get('name')
+ self.output = kwargs.get('output')
+ self.target_code = kwargs.get('target_src')
+ self.target_name = kwargs.get('target_name')
+ self.logger = kwargs.get('logger')
+ # session command
+ self.default_prompt = ">" if self.interactive else '# '
+ self.console = self.execute_dut_cmds
+ # initialize process
+ self.status = 'close'
+ self.process_pid = None
+ self.output_log = None
+ #
+ self.bin = self.compile(self.target_code, self.name)
+ self.bin_name = os.path.basename(self.bin)
+
+ def execute_dut_cmds(self, cmds):
+ if len(cmds) == 0:
+ return
+ if len(cmds) > 1:
+ outputs = []
+ else:
+ outputs = ''
+ for item in cmds:
+ expected_items = item[1]
+ if expected_items and isinstance(expected_items, (list, tuple)):
+ check_output = True
+ expected_str = expected_items[0] or self.default_prompt
+ else:
+ check_output = False
+ expected_str = expected_items or self.default_prompt
+ #----------------
+ if len(item) == 3:
+ timeout = int(item[2])
+ output = self.dut.send_expect(item[0], expected_str, timeout)
+ output = self.dut.get_session_output(timeout) if not output \
+ else output
+ else:
+ output = self.dut.send_expect(item[0], expected_str)
+ output = self.dut.get_session_output() if not output else output
+ #--------------------
+ if len(cmds) > 1:
+ outputs.append(output)
+ else:
+ outputs = output
+ if check_output and len(expected_items) >= 2:
+ self.logger.info(output)
+ expected_output = expected_items[1]
+ if len(expected_items) == 2:
+ check_type = True
+ else:
+ check_type = expected_items[2]
+
+ if check_type and expected_output in output:
+ msg = "expected '{0}' is in output".format(expected_output)
+ self.logger.info(msg)
+ elif not check_type and expected_output not in output:
+ msg = "unexpected '{0}' is not in output".format(
+ expected_output)
+ self.logger.info(msg)
+ else:
+ status = "isn't in" if check_type else "is in"
+ msg = "[{0}] {1} output".format(expected_output, status)
+ self.logger.error(msg)
+ raise VerifyFailure(msg)
+
+ time.sleep(2)
+ return outputs
+
+ def compile(self, target_code, name):
+ key_words = ['build', self.target_name]
+ tool_path = os.sep.join([target_code, 'examples', name])
+ cmds = []
+ cmds.append(['make -C {0}'.format(tool_path), '', 15])
+ self.console(cmds)
+ # check executable binary file
+ exec_bin = self.get_exec_bin_file(tool_path, key_words)
+
+ if not exec_bin or not os.path.exists(exec_bin):
+ msg = 'expected tool <{0}> does not exist'.format(name)
+ self.logger.error(msg)
+ raise VerifyFailure(msg)
+
+ return exec_bin
+
+ def get_exec_bin_file(self, tool_path, key_words):
+ bin_dir = []
+ for key_word in key_words:
+ cmds = []
+ cmds.append(['find {0} -name {1}'.format(tool_path, key_word),
+ '', 5])
+ output = self.console(cmds)
+ if output == '':
+ continue
+ bin_dir.extend(output.splitlines())
+ for dir in bin_dir:
+ cmds = []
+ cmds.append(["ls -F {0} | grep '*'".format(dir), '', 5])
+ exec_file = self.console(cmds)
+ exec_bin = os.sep.join([dir, exec_file[:-1]])
+ msg = "binary file is <{0}>".format(exec_bin)
+ self.logger.info(msg)
+ return exec_bin
+ else:
+ return None
+
+ def check_process(self, process_name, check_status):
+ kill_session = self.dut.new_session()
+ # check subprocess in task space
+ cmd = ("ps aux | grep -i '%s' | "
+ "grep -v grep | awk {'print $2'}")% (process_name)
+ out = kill_session.send_expect(cmd, '# ', 5)
+ if out != "":
+ self.process_pid = out.splitlines()[0]
+ self.logger.info("{0}'s pid is {1}".format(self.bin_name,
+ self.process_pid))
+ status = True
+ else:
+ status = False
+
+ kill_session.close()
+
+ if check_status == 'start' and not status:
+ raise_flg = True
+ elif check_status == 'close' and status:
+ raise_flg = True
+ else:
+ raise_flg = False
+
+ if raise_flg:
+ raise VerifyFailure("{0} {1} failed".format(process_name,
+ check_status))
+ else:
+ self.logger.info("{0} {1} success".format(process_name,
+ check_status))
+
+ return status
+
+ def start(self, eal_option=''):
+ if self.status == 'running':
+ return
+ if self.interactive:
+ pass
+ else:
+ cmds =[['{0} {1} & 2>&1'.format(self.bin, eal_option),
+ 'table_entry_delete succeeded', 15],]
+ self.console(cmds)
+ time.sleep(10)
+ ############################
+ # check if the process has booted up
+ self.check_process(self.bin, "start")
+ self.status = 'running'
+
+ def close(self, log="output.log"):
+ output = self.dut.get_session_output()
+ with open(log, 'wb') as fp:
+ fp.write(output)
+ if self.status == 'close':
+ return None
+ cmds =[['kill -TERM {0}'.format(self.process_pid), ''],]
+ output = self.console(cmds)
+ time.sleep(10)
+ self.check_process(self.bin, 'close')
+ self.status = 'close'
+
+ return output
+#############
+
+#############
+class TestIpfixFlowClassify(TestCase):
+
+ def send_packets_by_ixia(self, **kwargs):
+ tester_port = kwargs.get('tx_intf')
+ count = kwargs.get('count', 1)
+ traffic_type = kwargs.get('traffic_type', 'normal')
+ traffic_time = kwargs.get('traffic_time', 0)
+ rate_percent = kwargs.get('rate_percent', float(100))
+ #---------------------------------------------------------------
+ send_pkts = []
+ self.tgen_input = []
+ tgen_input = self.tgen_input
+ send_pkts = kwargs.get('stream')
+ pcap = self.target_source + os.sep + 'ixia.pcap'
+ wrpcap(pcap, send_pkts)
+ #-----------------------------------------------------------
+ # set packet for send
+ # pause frame basic configuration
+ pause_time = 65535
+ pause_rate = 0.50
+ # run ixia testing
+ frame_size = 64
+ # calculate number of packets
+ expect_pps = self.wirespeed(self.nic, frame_size, 1) * 1000000.0
+ # get line rate
+ linerate = expect_pps * (frame_size + 20) * 8
+ # calculate default sleep time for one pause frame
+ sleep = (1 / linerate) * pause_time * 512
+ # calculate packets dropped in sleep time
+ self.n_pkts = int((sleep / (1 / expect_pps)) * (1 / pause_rate))
+ #----------------------------------------------------------------
+ tgen_input.append((tester_port,
+ tester_port,
+ pcap))
+ # run latency stat statistics
+ self.rate_percent = rate_percent
+ self.pktgen_status = 'running'
+ #if traffic_type == 'burst':
+ stream_configs = kwargs.get('stream configs', None)
+ if not stream_configs:
+ raise VerifyFailure("no stream configs set")
+ self.tester.burst_traffic_generator_throughput(
+ tgen_input,
+ rate_percent,
+ **stream_configs)
+ # move stop method in packet thread
+ if traffic_time:
+ time.sleep(traffic_time)
+ result = self.stop_ixia()
+
+ return result
+
+ def stop_ixia(self, data_types='packets'):
+ # get ixia statistics
+ if self.pktgen_status != 'running':
+ return
+ try:
+ line_rate = self.tester.get_port_line_rate()
+ stop_traffic = self.tester.stop_traffic_generator_throughput_loop
+ rx_bps, rx_pps = stop_traffic(self.tgen_input)
+ output = self.tester.traffic_get_port_stats(self.tgen_input)
+ cur_data = {}
+ cur_data['ixia statistics'] = []
+ append = cur_data['ixia statistics'].append
+ append('send packets: {0}'.format(output[0]))
+ append('line_rate: {0}'.format(line_rate[0]))
+ append('rate_percent: {0}%'.format(self.rate_percent))
+ except Exception as e:
+ msg = traceback.format_exc()
+ self.logger.error(msg)
+ finally:
+ self.pktgen_status = 'stop'
+ return cur_data
+
+ def get_pktgen(self, name):
+ pkt_gens = {'ixia': self.send_packets_by_ixia}
+ pkt_generator = pkt_gens.get(name)
+
+ return pkt_generator
+ #
+ # Test cases.
+ #
+ def set_up_all(self):
+ """
+ Run before each test suite
+ """
+ #------------------------------------------------------------------
+ # initialize ports topology
+ self.dut_ports = self.dut.get_ports()
+ self.port_mask = utils.create_mask(self.dut_ports)
+ self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
+ self.target_source = self.dut.base_dir
+ # get output path
+ if self.logger.log_path.startswith(os.sep):
+ output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ output_path = cur_path + os.sep + self.logger.log_path
+ #
+ fcPsInfo = {
+ 'dut': self.dut,
+ 'name': 'flow_classify',
+ 'target_src': self.target_source,
+ 'target_name': self.dut.target,
+ 'output': output_path,
+ 'logger': self.logger,}
+ self.flow_classify = ExecBinProcess(**fcPsInfo)
+ self.output_log = None
+ #------------------------------------------------------------------
+ # initialize ixia session
+ self.pktgen_status = 'stop'
+ #------------------------------------------------------------------
+ # initialize packet generator
+ if self._enable_perf:
+ self.pktgen_name = 'ixia'
+ else:
+ self.pktgen_name = 'scapy_mix'
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ pass
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ pass
+
+ def get_pkt_len(self, pkt_type):
+ # packet size
+ frame_size = FRAME_SIZE_256
+ headers_size = sum(map(lambda x: HEADER_SIZE[x],
+ ['eth', 'ip', pkt_type]))
+ pktlen = frame_size - headers_size
+ return pktlen
+
+ def set_stream(self, stm_names=None):
+ '''
+ '''
+ #----------------------------------------------------------------------
+ # set streams for traffic
+ pkt_configs = {
+ # UDP_1:
+ # Frame Data/Protocols: Ethernet 2 0800, IPv4,UDP/IP, Fixed 64.
+ # IPv4 Header Page: Dest Address: 2.2.2.7 Src Address: 2.2.2.3
+ # UDP Header: Src Port: 32 Dest Port: 33
+ #
+ # Stream Control: Stop after this Stream, Packet Count 32.
+ #
+ 'UDP_1': {
+ 'type': 'UDP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '2.2.2.3', 'dst': '2.2.2.7'},
+ 'udp': {'src': 32, 'dst': 33},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('udp')}}},
+ # UDP_2:
+ # Frame Data/Protocols: Ethernet 2 0800, IPv4,UDP/IP, Fixed 64.
+ # IPv4 Header Page: Dest Address: 9.9.9.7 Src Address: 9.9.9.3
+ # UDP Header: Src Port: 32 Dest Port: 33
+ #
+ # Stream Control: Stop after this Stream, Packet Count 32.
+ #
+ 'UDP_2':{
+ 'type': 'UDP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '9.9.9.3', 'dst': '9.9.9.7'},
+ 'udp': {'src': 32, 'dst': 33},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('udp')}}},
+ 'invalid_UDP':{
+ 'type': 'UDP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '9.8.7.6', 'dst': '192.168.0.36'},
+ 'udp': {'src': 10, 'dst': 11},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('udp')}}},
+ # TCP_1:
+ # Frame Data/Protocols: Ethernet 2 0800, IPv4,TCP/IP, Fixed 64.
+ # IPv4 Header Page: Dest Address: 9.9.9.7 Src Address: 9.9.9.3
+ # TCP Header: Src Port: 32 Dest Port: 33
+ #
+ # Stream Control: Stop after this Stream, Packet Count 32.
+ #
+ 'TCP_1':{
+ 'type': 'TCP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '9.9.9.3', 'dst': '9.9.9.7'},
+ 'tcp': {'src': 32, 'dst': 33},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('tcp')}}},
+ # TCP_2:
+ # Frame Data/Protocols: Ethernet 2 0800, IPv4,TCP/IP, Fixed 64.
+ # IPv4 Header Page: Dest Address: 9.9.8.7 Src Address: 9.9.8.3
+ # TCP Header: Src Port: 32 Dest Port: 33
+ #
+ # Stream Control: Stop after this Stream, Packet Count 32.
+ #
+ 'TCP_2':{
+ 'type': 'TCP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '9.9.8.3', 'dst': '9.9.8.7'},
+ 'tcp': {'src': 32, 'dst': 33},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('tcp')}}},
+ 'invalid_TCP':{
+ 'type': 'TCP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '9.8.7.6', 'dst': '192.168.0.36'},
+ 'tcp': {'src': 10, 'dst': 11},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('tcp')}}},
+ # SCTP_1:
+ # Frame Data/Protocols: Ethernet 2 0800, IPv4, None, Fixed 256.
+ # IPv4 Header Page: Dest Address: 2.3.4.5 Src Address: 6.7.8.9
+ # Protocol: 132-SCTP
+ # Stream Control: Stop after this Stream, Packet Count 32.
+ #
+ 'SCTP_1':{
+ 'type': 'SCTP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '6.7.8.9', 'dst': '2.3.4.5'},
+ 'sctp': {'src': 32, 'dst': 33},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('sctp')}}},
+ 'invalid_SCTP':{
+ 'type': 'SCTP',
+ 'pkt_layers': {
+ #'ether': {'src': srcmac, 'dst': nutmac},
+ 'ipv4': {'src': '9.8.7.6', 'dst': '192.168.0.36'},
+ 'sctp': {'src': 10, 'dst': 11},
+ 'raw': {'payload': ['58'] * self.get_pkt_len('sctp')}}},
+ }
+
+ # create packet for send
+ streams = []
+ for stm_name in stm_names:
+ if stm_name not in pkt_configs.keys():
+ continue
+ values = pkt_configs[stm_name]
+ savePath = os.sep.join([self.target_source,
+ "pkt_{0}.pcap".format(stm_name)])
+ pkt_type = values.get('type')
+ pkt_layers = values.get('pkt_layers')
+ pkt = Packet(pkt_type=pkt_type)
+ for layer in pkt_layers.keys():
+ pkt.config_layer(layer, pkt_layers[layer])
+ #
+ pkt.pktgen.write_pcap(savePath)
+ streams.append(pkt.pktgen.pkt)
+
+ return streams
+
+ def get_stream_rule_priority(self, stream_type):
+ stream_types = {
+ 'UDP_1': 0,
+ 'UDP_2': 1,
+ 'TCP_1': 2,
+ 'TCP_2': 3,
+ 'SCTP_1': 4}
+ return stream_types.get(stream_type, None)
+
+ def traffic(self, ports_topo):
+ """
+ stream transmission on specified link topology
+ """
+ time.sleep(2)
+ result = self.send_packets_by_ixia(**ports_topo)
+ # end traffic
+ self.logger.info("complete transmission")
+
+ return result
+
+    def check_filter_pkts(self, log, rule_priority):
+        # match "rule[N] count=M" lines; restrict to one rule index
+        # when rule_priority is given
+        if rule_priority is not None:
+            pat = r"rule\[{0}\] count=(\d+)".format(rule_priority)
+        else:
+            pat = r"rule\[\d+\] count=(\d+)"
+        total = 0
+        with open(log, 'rb') as fp:
+            content = fp.read()
+        if content:
+            grp = re.findall(pat, content, re.M)
+            # sum the counters of all matched lines
+            total = sum(int(i) for i in grp)
+        return total
+
+ def run_test_pre(self):
+ # boot up flow_classify
+ rule_config = os.sep.join([self.target_source,
+ 'examples',
+ 'flow_classify',
+ 'ipv4_rules_file.txt'])
+        if not os.path.exists(rule_config):
+            raise VerifyFailure("rules file doesn't exist")
+ option = r" -c 4 -n 4 --file-prefix=test -- --rule_ipv4={0}".format(rule_config)
+ dt = datetime.now()
+ timestamp = dt.strftime('%Y-%m-%d_%H%M%S')
+ self.output_log = '{0}/{1}_{2}.log'.format(self.flow_classify.output,
+ self.flow_classify.name,
+ timestamp)
+ self.flow_classify.start(option)
+ time.sleep(10)
+
+ return True
+
+ def get_ixia_peer_port(self):
+ for cnt in self.dut_ports:
+ if self.tester.get_local_port_type(cnt) != 'ixia':
+ continue
+ tester_port = self.tester.get_local_port(cnt)
+ return tester_port
+
+ def run_test_post(self, **kwargs):
+ # close flow_classify
+ output = self.flow_classify.close(self.output_log)
+ return output
+
+ def run_traffic(self, **kwargs):
+ stm_types = kwargs.get('stm_types')
+ burst_packet = kwargs.get('burst_packet')
+ gap = kwargs.get('gap')
+ dma = kwargs.get('dma')
+ traffic_time = kwargs.get('traffic_time')
+ #-----------------------------------------
+ # set traffic topology
+        # for lack of an ixia port, one of the ixia ports uses a normal
+        # link peer, so a hard-coded port id is used here temporarily
+ port = 0
+ tester_port_id = self.get_ixia_peer_port()
+ if self.pktgen_name == 'ixia':
+ tx_port = tester_port_id
+ else:
+ tx_port = self.tester.get_interface(tester_port_id)
+ ports_topo = {'tx_intf': tx_port,
+ 'rx_intf': 0,
+ 'stream': self.set_stream(stm_types),
+ 'stream configs': {
+ 'count': burst_packet,
+ 'frameType': {
+ # gapNanoSeconds gapMilliSeconds gapSeconds
+ 'gapUnit': 'gapMilliSeconds',
+ 'ibg': gap[0],
+ 'ifg': gap[1],
+ 'isg': gap[2]},
+ 'flow_type': dma,
+ 'stream_type': 'burst'
+ },
+ # send bursts of 32 packets
+ 'traffic_type': 'burst',
+ # 0 means stop after one round traffic
+ # xx value means stop after traffic_time time
+ 'traffic_time': traffic_time,}
+ # begin traffic checking
+ result = self.traffic(ports_topo)
+
+ return result
+
+ def check_test_result(self, **kwargs):
+ check_results = []
+ stm_types = kwargs.get('stm_types')
+ burst_packet = kwargs.get('burst_packet')
+ dma = kwargs.get('dma')
+ self.logger.info(stm_types)
+ for stm_type in stm_types:
+ rule_priority = self.get_stream_rule_priority(stm_type)
+ captured_pkts = self.check_filter_pkts(self.output_log,
+ rule_priority)
+ self.logger.info("%s %d %d"%(stm_type, rule_priority or 0,
+ captured_pkts or 0))
+ msg = ''
+ if len(stm_types) > 1:#dma == 'contBurst':
+ # check if packets are multiples of burst pkts
+ # ignore invalid rule
+ if rule_priority and captured_pkts%burst_packet != 0 :
+ msg = ("captured packets are not multiples of "
+ "burst {0} packets".format(burst_packet))
+ else:
+ continue
+ elif dma == 'stopStream':
+ if rule_priority == None and captured_pkts != 0:
+ msg = "invalid stream hasn't been filtered out"
+ elif rule_priority != None and captured_pkts != burst_packet:
+ msg = "expect {0} ".format(burst_packet) + \
+ "captured {0}".format(captured_pkts)
+ else:
+ continue
+ else:
+ continue
+ if msg:
+ check_results.append(msg)
+
+ if check_results:
+ self.logger.error(os.linesep.join(check_results))
+ raise VerifyFailure("test result fail")
+ else:
+ return True
+
+ def burst_traffic(self, stm_types=None, gap=[100, 100, 100],
+ flow_type="one burst"):
+ self.logger.info('begin to check ...... ')
+
+ info = {}
+ info['stm_types'] = stm_types
+ info['burst_packet'] = 32
+ info['gap'] = gap
+ if flow_type == "one burst":
+ info['dma'] = 'stopStream'
+ info['traffic_time'] = 0
+ else:
+ info['dma'] = 'gotoFirst'
+ info['traffic_time'] = 30
+ check_flg = False
+ #-----------------------------------------
+ try:
+ # preset test environment
+ self.run_test_pre()
+ # run traffic
+ self.run_traffic(**info)
+ check_flg = True
+ except Exception as e:
+ pass
+ finally:
+ pass
+ #-----------------------------------------
+ # close flow_classify
+ self.run_test_post(**info)
+ #-----------------------------------------
+ # analysis test result
+ if check_flg == True:
+ status = self.check_test_result(**info)
+ else:
+ status = False
+
+ return status
+
+ def check_tx_mixed(self):
+ stream_list = [
+ 'UDP_1', 'UDP_2', 'invalid_UDP',
+ 'TCP_1', 'TCP_2', 'invalid_TCP',
+ 'SCTP_1', 'invalid_SCTP']
+ paras=[[1, 10, 1000]]
+ for para in paras:
+ self.burst_traffic(stm_types=stream_list, gap=para,
+ flow_type="mixed burst")
+
+ def test_perf_udp_valid_rule(self):
+ stream_list = ['UDP_1', 'UDP_2']
+ for stm_type in stream_list:
+ self.burst_traffic([stm_type])
+
+ def test_perf_udp_invalid_rule(self):
+ stream_list = ['invalid_UDP']
+ for stm_type in stream_list:
+ self.burst_traffic([stm_type])
+
+ def test_perf_tcp_valid_rule(self):
+ stream_list = ['TCP_1', 'TCP_2']
+ for stm_type in stream_list:
+ self.burst_traffic([stm_type])
+
+ def test_perf_tcp_invalid_rule(self):
+ stream_list = ['invalid_TCP']
+ for stm_type in stream_list:
+ self.burst_traffic([stm_type])
+
+ def test_perf_sctp_valid_rule(self):
+ stream_list = ['SCTP_1']
+ for stm_type in stream_list:
+ self.burst_traffic([stm_type])
+
+ def test_perf_sctp_invalid_rule(self):
+ stream_list = ['invalid_SCTP']
+ for stm_type in stream_list:
+ self.burst_traffic([stm_type])
+
+ def test_perf_whole_rules(self):
+ self.check_tx_mixed()
\ No newline at end of file
--
1.9.3
^ permalink raw reply [flat|nested] 5+ messages in thread
* [dts] [PATCH V1 3/4] ipfix_flow_classify: framework etgen/ixia function extend
2018-06-06 5:34 [dts] [PATCH V1 0/4] ipfix_flow_classify: upload test plan yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 1/4] " yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 2/4] ipfix_flow_classify: upload automation script yufengx.mo
@ 2018-06-06 5:34 ` yufengx.mo
2018-06-06 5:34 ` [dts] [PATCH V1 4/4] ipfix_flow_classify: framework setting yufengx.mo
3 siblings, 0 replies; 5+ messages in thread
From: yufengx.mo @ 2018-06-06 5:34 UTC (permalink / raw)
To: dts; +Cc: yufengmx
From: yufengmx <yufengx.mo@intel.com>
add a new method to send a group of multiple streams in burst mode with
a custom interval time
Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
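Note for reviewers (not part of this patch): a standalone sketch of the gap-timed
TCL fragment that the new config_burst_stream() below emits per stream. The
`stream config` option names are the ones used in the diff; the helper name and
its defaults are illustrative assumptions.

```python
def burst_frame_cmds(num_bursts=32, gap_unit='gapMilliSeconds',
                     isg=100, ifg=100, ibg=100):
    """Build the per-stream TCL fragment for gap-timed burst mode.

    isg/ifg/ibg are the inter-stream, inter-frame and inter-burst
    gaps, expressed in the unit named by gap_unit.
    """
    return [
        "stream config -numBursts {0}".format(num_bursts),
        "stream config -gapUnit {0}".format(gap_unit),
        "stream config -rateMode useGap",
        "stream config -ifg {0}".format(ifg),
        "stream config -ifgType gapFixed",
        "stream config -enableIbg true",
        "stream config -ibg {0}".format(ibg),
        "stream config -enableIsg true",
        "stream config -isg {0}".format(isg),
    ]

# one burst of 32 packets with a 1000 ms inter-burst gap
cmds = burst_frame_cmds(num_bursts=32, ibg=1000)
```

In the patch itself, these lines are appended to the TCL command list via the new
add_tcl_cmds() helper before the final `stream set` command.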
framework/etgen.py | 184 ++++++++++++++++++++++++++++++++++++++++++++++++++--
framework/tester.py | 53 ++++++++++++++-
2 files changed, 231 insertions(+), 6 deletions(-)
mode change 100755 => 100644 framework/tester.py
diff --git a/framework/etgen.py b/framework/etgen.py
index 2856a28..296d416 100644
--- a/framework/etgen.py
+++ b/framework/etgen.py
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2018 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -168,6 +168,8 @@ class IxiaPacketGenerator(SSHConnection):
"""
def __init__(self, tester):
+ self.bpsRate, self.oversize, self.rate = 0, 0, 0
+ self.rxPortlist, self.txPortlist = None, None
self.tester = tester
self.NAME = 'ixia'
self.logger = getLogger(self.NAME)
@@ -227,6 +229,12 @@ class IxiaPacketGenerator(SSHConnection):
Add one tcl command into command list.
"""
self.tcl_cmds.append(cmd)
+
+ def add_tcl_cmds(self, cmds):
+ """
+ Add multiple tcl commands into command list.
+ """
+ self.tcl_cmds += cmds
def clean(self):
"""
@@ -393,7 +401,62 @@ class IxiaPacketGenerator(SSHConnection):
self.add_tcl_cmd("stream set %d %d %d %d" %
(self.chasId, txport['card'], txport['port'], stream_id))
- def config_ixia_stream(self, rate_percent, flows, latency):
+ def config_burst_stream(self, fpcap, txport, rate_percent, stream_id=1,
+ latency=False, **kwargs):
+ """
+ Configure IXIA stream and enable multiple flows.
+ """
+ flows = self.parse_pcap(fpcap)
+
+ self.add_tcl_cmd("ixGlobalSetDefault")
+
+ dma = kwargs.get('flow_type', 'stopStream')
+ frameType = kwargs.get('frameType') or {}
+ gapUnit = frameType.get('type', 'gapMilliSeconds')
+ isg = frameType.get('isg', 100)
+ ifg = frameType.get('ifg', 100)
+ ibg = frameType.get('ibg', 100)
+ pkt = kwargs.get('count', 100)
+ frame_cmds = [
+ "stream config -numBursts {0}".format(pkt),
+ "stream config -gapUnit {0}".format(gapUnit),
+ "stream config -rateMode useGap",
+ "stream config -ifg {0}".format(ifg),
+ "stream config -ifgType gapFixed",
+ "stream config -enableIbg true",
+ "stream config -ibg {0}".format(ibg),
+ "stream config -enableIsg true",
+ "stream config -isg {0}".format(isg),]
+
+ self.config_ixia_stream(rate_percent, flows, latency)
+
+ pat = re.compile(r"(\w+)\((.*)\)")
+ for flow in flows:
+ for header in flow.split('/'):
+ match = pat.match(header)
+ params = eval('dict(%s)' % match.group(2))
+ method_name = match.group(1)
+ if method_name == 'Vxlan':
+ method = getattr(self, method_name.lower())
+ method(txport, **params)
+ break
+ if method_name in SCAPY2IXIA:
+ method = getattr(self, method_name.lower())
+ method(txport, **params)
+ cmds = ["stream config -name {0}".format(stream_id),
+ "stream set %d %d %d %d" % (self.chasId, txport['card'],
+ txport['port'], stream_id)]
+ self.add_tcl_cmds(frame_cmds + cmds)
+ stream_id += 1
+
+ stream_id -= 1
+ cmds = ["stream config -dma {0}".format(dma),
+ "stream set %d %d %d %d" %
+ (self.chasId, txport['card'], txport['port'], stream_id),
+ ]
+ self.add_tcl_cmds(cmds)
+
+ def config_ixia_stream(self, rate_percent, flows, latency, dma=None):
"""
Configure IXIA stream with rate and latency.
Override this method if you want to add custom stream configuration.
@@ -402,7 +465,7 @@ class IxiaPacketGenerator(SSHConnection):
self.add_tcl_cmd("stream config -percentPacketRate %s" % rate_percent)
self.add_tcl_cmd("stream config -numFrames 1")
if len(flows) == 1:
- self.add_tcl_cmd("stream config -dma contPacket")
+ self.add_tcl_cmd("stream config -dma {0}".format(dma or 'contPacket'))
else:
self.add_tcl_cmd("stream config -dma advance")
# request by packet Group
@@ -527,6 +590,43 @@ class IxiaPacketGenerator(SSHConnection):
return {'card': int(m.group(1)), 'port': int(m.group(2))}
+ def get_port_stats(self, rxPortlist, txPortlist, delay=5):
+ """
+ Get RX/TX packet statistics and calculate loss rate.
+ """
+ time.sleep(delay)
+
+ time.sleep(2)
+ sendNumber = 0
+ for port in txPortlist:
+ self.stat_get_stat_all_stats(port)
+ sendNumber += self.get_frames_sent()
+ time.sleep(0.5)
+
+ self.logger.info("sent: %d" % sendNumber)
+
+ assert sendNumber != 0
+
+ revNumber = 0
+ for port in rxPortlist:
+ self.stat_get_stat_all_stats(port)
+ revNumber += self.get_frames_received()
+ self.logger.info("received: %d" % revNumber)
+
+ return sendNumber, revNumber
+
+ def port_stats(self, portList, ratePercent, delay=5):
+ rxPortlist, txPortlist = self.rxPortlist, self.txPortlist
+ return self.get_port_stats(rxPortlist, txPortlist, delay)
+
+ def port_line_rate(self):
+ chasId = '1'
+ port_line_rate = []
+ for port in self.ports:
+ port_line_rate.append(self.get_line_rate(chasId, port))
+ return port_line_rate
+
def loss(self, portList, ratePercent, delay=5):
"""
Run loss performance test and return loss rate.
@@ -589,6 +689,41 @@ class IxiaPacketGenerator(SSHConnection):
"""
rxPortlist, txPortlist = self._configure_everything(port_list, rate_percent)
return self.get_transmission_results(rxPortlist, txPortlist, delay)
+
+ def loop_throughput(self, port_list, rate_percent=100, delay=5):
+ """
+ Start a continuous throughput transmission; statistics are
+ collected later by stop_loop_throughput().
+ """
+ rxPortlist, txPortlist = self._configure_everything(port_list, rate_percent)
+ self.rxPortlist, self.txPortlist = rxPortlist, txPortlist
+
+ def stop_loop_throughput(self, port_list, rate_percent=100, delay=5):
+ """
+ Stop the continuous transmission and return throughput statistics.
+ """
+ return self.stop_loop_transmission(self.rxPortlist, self.txPortlist,
+ delay)
+
+ def burst_throughput(self, port_list, rate_percent=100, delay=5, **kwargs):
+ """
+ Start a burst transmission; statistics are collected separately.
+ """
+ rxPortlist, txPortlist = self._configure_burst(port_list, rate_percent, **kwargs)
+ self.rxPortlist, self.txPortlist = rxPortlist, txPortlist
+ return True
+
+ def _configure_burst(self, port_list, rate_percent, latency=False, **kwargs):
+ """
+ Prepare and configure IXIA ports for performance test.
+ """
+ rxPortlist, txPortlist = self.prepare_port_list(port_list, rate_percent,
+ latency, **kwargs)
+ self.prepare_ixia_for_transmission(txPortlist, rxPortlist)
+ self.configure_transmission()
+ self.start_transmission()
+ self.clear_tcl_commands()
+ return rxPortlist, txPortlist
"""
This function could be used to check the packets' order whether same as the receive sequence.
@@ -644,12 +779,14 @@ class IxiaPacketGenerator(SSHConnection):
"""
self.add_tcl_cmd("ixStartTransmit portList")
- def prepare_port_list(self, portList, rate_percent=100, latency=False):
+ def prepare_port_list(self, portList, rate_percent=100, latency=False,
+ **kwargs):
"""
Configure stream and flow on every IXIA ports.
"""
txPortlist = set()
rxPortlist = set()
+ stream_type = kwargs.get('stream_type') or 'normal'
for (txPort, rxPort, pcapFile) in portList:
txPortlist.add(txPort)
@@ -661,7 +798,15 @@ class IxiaPacketGenerator(SSHConnection):
# stream/flow setting
for (txPort, rxPort, pcapFile) in portList:
- self.config_stream(pcapFile, self.pci_to_port(self.tester.get_pci(txPort)), rate_percent, 1, latency)
+ if stream_type != 'normal':
+ config_stream = getattr(self, 'config_{0}_stream'.format(stream_type))
+ config_stream(pcapFile,
+ self.pci_to_port(self.tester.get_pci(txPort)),
+ rate_percent, 1, latency, **kwargs)
+ else:
+ self.config_stream(pcapFile,
+ self.pci_to_port(self.tester.get_pci(txPort)),
+ rate_percent, 1, latency)
# config stream before packetGroup
if latency is not False:
@@ -686,6 +831,35 @@ class IxiaPacketGenerator(SSHConnection):
def hook_transmission_func(self):
pass
+ def loop_get_transmission_results(self, rx_port_list, tx_port_list, delay=5):
+ """
+ Override this method if you want to change the way of getting results
+ back from IXIA.
+ """
+ time.sleep(delay)
+ bpsRate = self.bpsRate
+ rate = self.rate
+ oversize = self.oversize
+ for port in rx_port_list:
+ self.stat_get_rate_stat_all_stats(port)
+ out = self.send_expect("stat cget -framesReceived", '%', 10)
+ rate += int(out.strip())
+ out = self.send_expect("stat cget -bitsReceived", '% ', 10)
+ self.logger.debug("port %d bits rate:" % (port) + out)
+ bpsRate += int(out.strip())
+ out = self.send_expect("stat cget -oversize", '%', 10)
+ oversize += int(out.strip())
+
+ self.logger.info("Rate: %f Mpps" % (rate * 1.0 / 1000000))
+ self.logger.info("Mbps rate: %f Mbps" % (bpsRate * 1.0 / 1000000))
+ self.hook_transmission_func()
+
+ return True
+
+ def stop_loop_transmission(self, rx_port_list, tx_port_list, delay=5):
+ return self.get_transmission_results(rx_port_list, tx_port_list,
+ delay)
+
def get_transmission_results(self, rx_port_list, tx_port_list, delay=5):
"""
Override this method if you want to change the way of getting results
diff --git a/framework/tester.py b/framework/tester.py
old mode 100755
new mode 100644
index a775f68..7f34292
--- a/framework/tester.py
+++ b/framework/tester.py
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2018 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -502,6 +502,41 @@ class Tester(Crb):
return None
return self.packet_gen.throughput(portList, rate_percent)
+ def loop_traffic_generator_throughput(self, portList, rate_percent=100,
+ delay=5):
+ """
+ Start continuous throughput transmission on specified ports.
+ """
+ if self.check_port_list(portList, 'ixia'):
+ return self.ixia_packet_gen.loop_throughput(portList, rate_percent,
+ delay)
+ if not self.check_port_list(portList):
+ self.logger.warning("exception by mixed port types")
+ return None
+ result = self.packet_gen.loop_throughput(portList, rate_percent)
+ return result
+
+ def stop_traffic_generator_throughput_loop(self, portList,
+ rate_percent=100, delay=5):
+ """
+ Stop the throughput loop on specified ports and collect results.
+ """
+ return self.ixia_packet_gen.stop_loop_throughput(portList,
+ rate_percent, delay)
+
+ def burst_traffic_generator_throughput(self, portList, rate_percent=100,
+ delay=5, **kwargs):
+ """
+ Run burst throughput test on specified ports.
+ """
+ if self.check_port_list(portList, 'ixia'):
+ return self.ixia_packet_gen.burst_throughput(portList, rate_percent,
+ delay, **kwargs)
+ if not self.check_port_list(portList):
+ self.logger.warning("exception by mixed port types")
+ return None
+ return None
+
def verify_packet_order(self, portList, delay):
if self.check_port_list(portList, 'ixia'):
return self.ixia_packet_gen.is_packet_ordered(portList, delay)
@@ -536,6 +571,22 @@ class Tester(Crb):
return None
return self.packet_gen.loss(portList, ratePercent, delay)
+ def traffic_get_port_stats(self, portList, delay=60):
+ """
+ Get RX/TX packet statistics on specified ports.
+ """
+ ratePercent = float(100)
+ if self.check_port_list(portList, 'ixia'):
+ return self.ixia_packet_gen.port_stats(portList, ratePercent, delay)
+ if not self.check_port_list(portList):
+ self.logger.warning("exception by mixed port types")
+ return None
+
+ def get_port_line_rate(self):
+ return self.ixia_packet_gen.port_line_rate()
+
def traffic_generator_latency(self, portList, ratePercent=100, delay=5):
"""
Run latency performance test on specified ports.
--
1.9.3
^ permalink raw reply [flat|nested] 5+ messages in thread
* [dts] [PATCH V1 4/4] ipfix_flow_classify: framework setting
2018-06-06 5:34 [dts] [PATCH V1 0/4] ipfix_flow_classify: upload test plan yufengx.mo
` (2 preceding siblings ...)
2018-06-06 5:34 ` [dts] [PATCH V1 3/4] ipfix_flow_classify: framework etgen/ixia function extend yufengx.mo
@ 2018-06-06 5:34 ` yufengx.mo
3 siblings, 0 replies; 5+ messages in thread
From: yufengx.mo @ 2018-06-06 5:34 UTC (permalink / raw)
To: dts; +Cc: yufengmx
From: yufengmx <yufengx.mo@intel.com>
add SCTP header size definition
Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
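Note for reviewers (not part of this patch): a sketch of how HEADER_SIZE entries
are typically consumed to derive payload length for a target frame size. The
`eth` entry (14 B MAC header + 4 B CRC) and the payload_size() helper are
assumptions for this illustration, not DTS APIs.

```python
# Subset of HEADER_SIZE as in framework/settings.py after this patch;
# the 'eth' entry is an assumed value for a complete frame budget.
HEADER_SIZE = {
    'eth': 18,      # assumed: 14-byte MAC header + 4-byte CRC
    'ipv6': 40,
    'udp': 8,
    'tcp': 20,
    'sctp': 132,    # value added by this patch
    'vxlan': 8,
}

def payload_size(frame_size, *layers):
    """Payload bytes left in a frame after the listed headers."""
    return frame_size - sum(HEADER_SIZE[l] for l in layers)

# payload carried by a 512-byte SCTP-over-IPv6 frame
room = payload_size(512, 'eth', 'ipv6', 'sctp')
```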
framework/settings.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/framework/settings.py b/framework/settings.py
index 07c3ac6..99107fd 100644
--- a/framework/settings.py
+++ b/framework/settings.py
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2018 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -173,6 +173,7 @@ HEADER_SIZE = {
'ipv6': 40,
'udp': 8,
'tcp': 20,
+ 'sctp': 132,
'vxlan': 8,
}
"""
--
1.9.3
^ permalink raw reply [flat|nested] 5+ messages in thread