test suite reviews and discussions
From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "Wang, Yinan" <yinan.wang@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "Wang, Yinan" <yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v1] tests: add packed ring vectorized ring cases in TestSuite_pvp_multi_paths_vhost_single_core_performance.py
Date: Mon, 27 Apr 2020 07:51:05 +0000	[thread overview]
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BC12A86@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <20200425213208.114434-1-yinan.wang@intel.com>

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Sunday, April 26, 2020 5:32 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] tests: add packed ring vectorized ring cases in
> TestSuite_pvp_multi_paths_vhost_single_core_performance.py
> 
> From: Wang Yinan <yinan.wang@intel.com>
> 
> Signed-off-by: Wang Yinan <yinan.wang@intel.com>
> ---
>  ...lti_paths_vhost_single_core_performance.py | 28 ++++++++++++++-----
>  1 file changed, 21 insertions(+), 7 deletions(-)
> 
> diff --git
> a/tests/TestSuite_pvp_multi_paths_vhost_single_core_performance.py
> b/tests/TestSuite_pvp_multi_paths_vhost_single_core_performance.py
> index 1da023e..c4d4081 100644
> --- a/tests/TestSuite_pvp_multi_paths_vhost_single_core_performance.py
> +++ b/tests/TestSuite_pvp_multi_paths_vhost_single_core_performance.py
> @@ -114,7 +114,6 @@ class TestPVPMultiPathVhostPerformance(TestCase):
>                  self.dut_ports[0])
>              destination_mac = self.dut.get_mac_address(
>                  self.dut_ports[0])
> -
>              pkt = Packet(pkt_type='UDP', pkt_len=frame_size)
>              pkt.config_layer('ether', {'dst': '%s' % destination_mac})
>              pkt.save_pcapfile(self.tester, "%s/multi_path.pcap" % (self.out_path))
> @@ -122,15 +121,13 @@ class TestPVPMultiPathVhostPerformance(TestCase):
> 
>              self.tester.pktgen.clear_streams()
>              streams = self.pktgen_helper.prepare_stream_from_tginput(tgen_input, 100, None, self.tester.pktgen)
> -            traffic_opt = {'delay': 5}
> -            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams, options=traffic_opt)
> +            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
>              Mpps = pps / 1000000.0
>              self.verify(Mpps > 0.0, "%s can not receive packets of frame size %d" % (self.running_case, frame_size))
> 
>              linerate = Mpps * 100 / \
>                           float(self.wirespeed(self.nic, frame_size, self.number_of_ports))
>              self.throughput[frame_size][self.nb_desc] = Mpps
> -
>              results_row = [frame_size]
>              results_row.append(case_info)
>              results_row.append(Mpps)
> @@ -343,7 +340,24 @@ class TestPVPMultiPathVhostPerformance(TestCase):
>          """
>          self.test_target = self.running_case
>          self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
> -        virtio_pmd_arg = {"version": "in_order=1,packed_vq=1,mrg_rxbuf=0",
> +        virtio_pmd_arg = {"version": "in_order=1,packed_vq=1,mrg_rxbuf=0,vectorized=1",
> +                            "path": "--rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip"}
> +        self.start_vhost_testpmd()
> +        self.start_virtio_testpmd(virtio_pmd_arg)
> +        self.send_and_verify("virtio_1.1 inorder normal")
> +        self.close_all_testpmd()
> +        self.logger.info('result of all framesize result')
> +        self.result_table_print()
> +        self.handle_expected()
> +        self.handle_results()
> +
> +    def test_perf_vhost_single_core_virtio11_vectorized(self):
> +        """
> +        performance for Vhost PVP virtio1.1 vectorized Path.
> +        """
> +        self.test_target = self.running_case
> +        self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
> +        virtio_pmd_arg = {"version": "in_order=1,packed_vq=1,mrg_rxbuf=0,vectorized=1",
>                              "path": "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip"}
>          self.start_vhost_testpmd()
>          self.start_virtio_testpmd(virtio_pmd_arg)
> @@ -371,9 +385,9 @@ class TestPVPMultiPathVhostPerformance(TestCase):
>          self.handle_expected()
>          self.handle_results()
> 
> -    def test_perf_vhost_single_core_inorder_no_mergeable(self):
> +    def test_perf_vhost_single_core_inorder_no_normal(self):
>          """
> -        performance for Vhost PVP In_order no_mergeable Path.
> +        performance for Vhost PVP In_order normal Path.
>          """
>          self.test_target = self.running_case
>          self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
> --
> 2.17.1
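
For readers outside the DTS tree: the new cases differ from the existing ones only in the devargs string handed to virtio-user (adding vectorized=1 alongside in_order=1,packed_vq=1,mrg_rxbuf=0). A minimal sketch of how a suite-style virtio_pmd_arg dict could be expanded into a testpmd command line; the helper name, MAC, core list, and socket path below are illustrative assumptions, not taken from the suite:

```python
# Sketch: expand a DTS-style virtio_pmd_arg dict into a virtio-user
# testpmd command line. The helper name, MAC, cores, and socket path
# are illustrative, not the suite's actual start_virtio_testpmd().
def build_virtio_testpmd_cmd(virtio_pmd_arg, sock="/tmp/vhost-net"):
    # "version" carries the virtio-user devargs under test, e.g. the
    # packed ring vectorized path added by this patch.
    vdev = "net_virtio_user0,mac=00:11:22:33:44:55,path=%s,%s" % (
        sock, virtio_pmd_arg["version"])
    # "path" carries the extra testpmd flags; --no-pci keeps testpmd
    # off physical NICs so only the virtio-user port is probed.
    return "dpdk-testpmd -l 1-2 -n 4 --no-pci --vdev '%s' -- -i %s" % (
        vdev, virtio_pmd_arg["path"])

cmd = build_virtio_testpmd_cmd(
    {"version": "in_order=1,packed_vq=1,mrg_rxbuf=0,vectorized=1",
     "path": "--rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip"})
print(cmd)
```

Note that vectorized=1 is only honored when the DPDK build and the CPU actually support the vectorized virtio datapath; otherwise the PMD silently falls back to the scalar path.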


Thread overview: 2+ messages
2020-04-25 21:32 Yinan
2020-04-27  7:51 ` Tu, Lijuan [this message]
