From: xiewei <weix.xie@intel.com>
To: dts@dpdk.org
Cc: xiewei <weix.xie@intel.com>
Subject: [dts] [PATCH 2/2] test_plans/rxtx_offload_test_plan: adapt to CVL NIC
Date: Wed, 24 Mar 2021 19:24:43 +0800
Message-ID: <20210324112443.29319-3-weix.xie@intel.com>
In-Reply-To: <20210324112443.29319-1-weix.xie@intel.com>

Add support for the CVL NIC.

Signed-off-by: xiewei <weix.xie@intel.com>
---
 test_plans/rxtx_offload_test_plan.rst | 29 +++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)
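
The ice RSS-key behaviour described in the first note below can be
cross-checked in testpmd by dumping the RSS key before and after the
per-queue change. This is only a reviewer sketch, not part of the test
plan change itself; it assumes port 0 is the ice port under test, and
the key bytes shown are driver-dependent::

    testpmd> show port 0 rss-hash key
    testpmd> port stop 0
    testpmd> port 0 rxq 0 rx_offload jumbo_frame off
    testpmd> port start 0
    testpmd> show port 0 rss-hash key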

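Similarly, the mbuf_fast_free failure in the second note can be tied to
the per-queue Tx offload capabilities: the 0x10000 in the error log is
DEV_TX_OFFLOAD_MBUF_FAST_FREE (1 << 16), and the command below lists
which Tx offloads the driver reports as per-queue capable (again only a
sketch; it assumes port 0 is the ice port and the output format depends
on the DPDK version)::

    testpmd> show port 0 tx_offload capabilities
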
diff --git a/test_plans/rxtx_offload_test_plan.rst b/test_plans/rxtx_offload_test_plan.rst
index 246e1e16..8b034d67 100644
--- a/test_plans/rxtx_offload_test_plan.rst
+++ b/test_plans/rxtx_offload_test_plan.rst
@@ -146,6 +146,21 @@ Test case: Rx offload per-port setting
    The port can be started normally, but the setting doesn't take effect.
    Pkt1 still can be distributed to queue 1.
 
+Note:
+
+For the ice NIC, however, disabling jumboframe per-queue changes the RSS key.
+If jumboframe is set per-queue, the hash values of received packets change, so the test result cannot be judged by queue number.
+Instead, disable jumboframe per-queue on all the queues; if the packet can still be received on any queue, the setting doesn't take effect::
+
+    testpmd> port stop 0
+    testpmd> port 0 rxq 0 rx_offload jumbo_frame off
+    testpmd> port 0 rxq 1 rx_offload jumbo_frame off
+    testpmd> port 0 rxq 2 rx_offload jumbo_frame off
+    testpmd> port 0 rxq 3 rx_offload jumbo_frame off
+    testpmd> port start 0
+
+   Pkt1 still can be received and distributed to a queue by RSS.
+
 4. Succeed to disable jumboframe per_port::
 
     testpmd> port stop 0
@@ -643,6 +658,20 @@ Test case: FVL Tx offload per-queue setting
 
    The port fwd can be started normally.
 
+Note:
+
+For the ice NIC, however, enabling mbuf_fast_free per_queue fails::
+
+    testpmd> port stop 0
+    testpmd> port 0 txq 0 tx_offload mbuf_fast_free on
+    testpmd> port 0 txq 1 tx_offload mbuf_fast_free on
+    testpmd> port 0 txq 2 tx_offload mbuf_fast_free on
+    testpmd> port 0 txq 3 tx_offload mbuf_fast_free on
+    testpmd> port start 0
+    Configuring Port 0 (socket 0)
+    Ethdev port_id=0 tx_queue_id=0, new added offloads 0x10000 must be within per-queue offload capabilities 0x0 in rte_eth_tx_queue_setup()
+    Fail to configure port 0 tx queues
+
 4. Disable mbuf_fast_free per_queue::
 
     testpmd> port stop 0
-- 
2.17.1

