From: Zhimin Huang <zhiminx.huang@intel.com>
To: dts@dpdk.org
Cc: Zhimin Huang <zhiminx.huang@intel.com>
Subject: [dts][PATCH V1] tests/large_vf: modify test case to adapt to ice change
Date: Mon, 18 Jul 2022 23:27:29 +0800	[thread overview]
Message-ID: <20220718152729.5200-1-zhiminx.huang@intel.com> (raw)

After the ice kernel driver update, only 767 queues are available on a
2-port E810 NIC, so only 2 VFs can start testpmd with 256 queues each.
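
The resulting VF budget is a simple floor division; a minimal sketch using the
queue counts quoted in the test plan note below (illustrative only, not part
of the patch)::

    QUEUES_PER_VF = 256

    def max_vfs_with_256_queues(available_queues):
        # floor division: how many VFs can each claim a full 256-queue block
        return available_queues // QUEUES_PER_VF

    assert max_vfs_with_256_queues(767) == 2  # ice-1.9.5 (SW4.0), 2-port E810
    assert max_vfs_with_256_queues(943) == 3  # ice-1.8.3 (SW3.2), 2-port E810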

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 test_plans/large_vf_test_plan.rst | 11 ++++++++---
 tests/TestSuite_large_vf.py       | 10 ++++++----
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/test_plans/large_vf_test_plan.rst b/test_plans/large_vf_test_plan.rst
index c7f0d119..e043a660 100644
--- a/test_plans/large_vf_test_plan.rst
+++ b/test_plans/large_vf_test_plan.rst
@@ -36,7 +36,7 @@ Note::
      --total-num-mbufs=N, N is mbuf number, usually allocate 512 mbuf for one
      queue, if use 3 VFs, N >= 512*256*3=393216.
 
-Test case: 3 Max VFs + 256 queues
+Test case: multi VFs + 256 queues
 =================================
 
 Subcase 1 : multi fdir for 256 queues of consistent queue group
@@ -308,12 +308,17 @@ or::
 Fail to setup test.
 
 
-Subcase 7: negative: fail to setup 256 queues when more than 3 VFs
+Subcase 7: negative: fail to setup 256 queues when more than 2 VFs
 ------------------------------------------------------------------
-Create 4 VFs.
+Create 3 VFs.
 Bind all VFs to vfio-pci.
 Fail to start testpmd with "--txq=256 --rxq=256".
 
+.. note::
+
+    For SW4.0 + ice-1.9.5, 767 queues are available on a 2-port E810 NIC, so 2 VFs can start testpmd with 256 queues.
+
+    For SW3.2 + ice-1.8.3, 943 queues are available on a 2-port E810 NIC, so 3 VFs can start testpmd with 256 queues.
 
 Test case: 128 Max VFs + 4 queues (default)
 ===========================================
diff --git a/tests/TestSuite_large_vf.py b/tests/TestSuite_large_vf.py
index 00991790..4e2ff1d6 100644
--- a/tests/TestSuite_large_vf.py
+++ b/tests/TestSuite_large_vf.py
@@ -355,7 +355,9 @@ class TestLargeVf(TestCase):
                 elif subcase_name == "test_more_than_3_vfs_256_queues":
                     self.pmd_output.execute_cmd("quit", "#")
                     # start testpmd use 256 queues
-                    for i in range(self.vf_num + 1):
+                    # for CVL 2 ports, only 767 queues are available with ice-1.9.5 (SW4.0)
+                    _vfs_num = self.vf_num - 1
+                    for i in range(_vfs_num + 1):
                         if self.max_vf_num == 64:
                             self.pmdout_list[0].start_testpmd(
                                 param=tv["param"],
@@ -379,7 +381,7 @@ class TestLargeVf(TestCase):
                             self.pmdout_list[0].execute_cmd("quit", "# ")
                             break
                         else:
-                            if i < self.vf_num:
+                            if i < _vfs_num:
                                 self.pmdout_list[i].start_testpmd(
                                     param=tv["param"],
                                     ports=[self.sriov_vfs_port[i].pci],
@@ -406,7 +408,7 @@ class TestLargeVf(TestCase):
                                 self.pmdout_list[0].execute_cmd("quit", "# ")
                                 self.pmdout_list[1].execute_cmd("quit", "# ")
                                 self.pmdout_list[2].execute_cmd("quit", "# ")
-                                if self.vf_num > 3:
+                                if _vfs_num > 3:
                                     self.pmdout_list[3].execute_cmd("quit", "# ")
                                     self.pmdout_list[4].execute_cmd("quit", "# ")
                                     self.pmdout_list[5].execute_cmd("quit", "# ")
@@ -645,7 +647,7 @@ class TestLargeVf(TestCase):
             )
             self.check_txonly_pkts(rxtx_num)
 
-    def test_3_vfs_256_queues(self):
+    def test_multi_vfs_256_queues(self):
         self.create_iavf(self.vf_num + 1)
         self.launch_testpmd("--rxq=256 --txq=256", total=True)
         self.config_testpmd()
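
The changed loop starts testpmd with 256 queues on each of the first _vfs_num
VFs and expects the one extra VF to fail once the shared queue pool is
exhausted. A minimal standalone sketch of that expectation (hypothetical
helper, not the DTS PmdOutput API)::

    AVAILABLE_QUEUES = 767  # assumed: 2-port E810 with ice-1.9.5 (SW4.0)

    def start_testpmd_on_vf(vf_index, rxq=256, txq=256):
        # stand-in for launching testpmd on one VF; fails when queues run out
        if (vf_index + 1) * max(rxq, txq) > AVAILABLE_QUEUES:
            raise RuntimeError("Fail to start testpmd: not enough queues left")

    expected_ok = AVAILABLE_QUEUES // 256      # == 2 VFs
    for i in range(expected_ok + 1):           # last iteration must fail
        try:
            start_testpmd_on_vf(i)
        except RuntimeError:
            assert i == expected_ok            # only the extra VF is rejected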
-- 
2.17.1

