From: Peng Yuan <yuan.peng@intel.com>
To: dts@dpdk.org
Cc: pengyuan <yuan.peng@intel.com>
Subject: [dts] [PATCH v2] test_plans: correct the description error for PFCP case
Date: Fri, 19 Jun 2020 14:51:28 +0000	[thread overview]
Message-ID: <1592578288-112628-1-git-send-email-yuan.peng@intel.com> (raw)

From: pengyuan <yuan.peng@intel.com>

Correct the description error for the PFCP case.

Signed-off-by: pengyuan <yuan.peng@intel.com>

diff --git a/test_plans/cvl_advanced_iavf_rss_test_plan.rst b/test_plans/cvl_advanced_iavf_rss_test_plan.rst
index 3fd8882..1fca6fc 100644
--- a/test_plans/cvl_advanced_iavf_rss_test_plan.rst
+++ b/test_plans/cvl_advanced_iavf_rss_test_plan.rst
@@ -44,7 +44,11 @@ to hash IP and ports domain, diversion the packets to the difference queues in V
 * GTPU_DOWN and GTPU_UP rule creat and package
 * symmetric hash by rte_flow RSS action.
 * input set change by rte_flow RSS action.
-  
+* For the PFCP protocol, the destination port value of the outer UDP header is equal to 8805 (0x2265).
+  PFCP Node headers shall be identified when the Version field is equal to 001 and the S field is equal to 0.
+  PFCP Session headers shall be identified when the Version field is equal to 001 and the S field is equal to 1.
+  CVL only supports RSS hash for the PFCP Session SEID value.
+
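The flag layout described above can be sketched in plain Python (a minimal illustration based on the PFCP header format in 3GPP TS 29.244; not part of the test steps, and no scapy required):

```python
PFCP_PORT = 8805  # outer UDP destination port, 0x2265

def pfcp_flags(version=1, mp=0, s=0):
    """First octet of the PFCP header per 3GPP TS 29.244:
    bits 8-6 Version, bits 5-3 spare, bit 2 MP, bit 1 S."""
    return (version << 5) | (mp << 1) | s

# Node header: Version=001, S=0 -> 0x20; Session header: Version=001, S=1 -> 0x21
node_flags = pfcp_flags(s=0)
session_flags = pfcp_flags(s=1)
```

Only Session headers (S=1) carry the SEID that CVL hashes on.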
 Pattern and input set
 ---------------------
 .. table::
@@ -128,6 +132,10 @@ Pattern and input set
     +-------------------------------+---------------------------+----------------------------------------------------------------------------------+
     |                               | MAC_IPV4_CVLAN            |  [VLAN ID]                                                                       |
     +-------------------------------+---------------------------+----------------------------------------------------------------------------------+
+    |                               | MAC_IPV4_PFCP_SESSION     |  [SEID]                                                                          |
+    +-------------------------------+---------------------------+----------------------------------------------------------------------------------+
+    |                               | MAC_IPV6_PFCP_SESSION     |  [SEID]                                                                          |
+    +-------------------------------+---------------------------+----------------------------------------------------------------------------------+
 
 .. table::
 
@@ -307,14 +315,16 @@ Compile DPDK and testpmd::
     testpmd>set fwd rxonly
     testpmd>set verbose 1
     testpmd>rx_vxlan_port add 4789 0
-   
-5. start scapy and configuration NVGRE and GTP profile in tester
+
+5. start scapy and configure the NVGRE, PFCP and GTP profiles on the tester:
+   add pfcp.py to "scapy/layers", and add "pfcp" to "load_layers" in "scapy/config.py",
    scapy::
 
-   >>> import sys
-   >>> sys.path.append('~/dts/dep')
-   >>> from nvgre import NVGRE
-   >>> from scapy.contrib.gtp import *
+    >>> import sys
+    >>> sys.path.append('~/dts/dep')
+    >>> from nvgre import NVGRE
+    >>> from pfcp import PFCP
+    >>> from scapy.contrib.gtp import *
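For reference, the Session header that `PFCP(Sfield=1, SEID=12)` builds can be approximated in plain Python (a hedged sketch of the TS 29.244 layout; the message type value and helper name are illustrative, not taken from the dts pfcp.py helper):

```python
import struct

def pfcp_session_header(msg_type, seid, seq, payload=b""):
    """Serialize a PFCP message with a Session header: Version=1, S=1,
    so an 8-byte SEID precedes the 3-byte sequence number."""
    flags = (1 << 5) | 1                       # Version=001, S=1
    body = struct.pack("!Q", seid)             # 64-bit SEID
    body += seq.to_bytes(3, "big") + b"\x00"   # 24-bit sequence + spare octet
    length = len(body) + len(payload)          # octets following the Length field
    return struct.pack("!BBH", flags, msg_type, length) + body + payload

hdr = pfcp_session_header(msg_type=50, seid=12, seq=1)
```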
 
 Test case: MAC_IPV4_L3SRC
 =========================
@@ -4166,86 +4176,9 @@ Test case: MAC_ETH:
 #. Destory rule on port 0 
          testpmd> flow flush 0
 
-==========================================
-CVL Support RSS for PFCP in advanced iavf
-==========================================
-
-Description
-===========
-
-For PFCP protocal, the destination port value of the outer UDP header is equal to 8805(0x2265)
-PFCP Node headers shall be identified when the Version field is equal to 001 and the S field is equal 0.
-PFCP Session headers shall be identified when the Version field is equal to 001 and the S field is equal 1.
 
-CVL supports PFCP protocols in advanced iavf, the supported pattern as below::
-    
-    +-------------------------+------------------------+
-    |    Packet type          |     RSS input set      |
-    +-------------------------+------------------------+
-    |  MAC_IPV4_PFCP_NODE     |           -            |
-    +-------------------------+------------------------+
-    |  MAC_IPV4_PFCP_SESSION  |          SEID          |
-    +-------------------------+------------------------+
-    |  MAC_IPV6_PFCP_NODE     |           -            |
-    +-------------------------+------------------------+
-    |  MAC_IPV6_PFCP_SESSION  |          SEID          |
-    +-------------------------+------------------------+
-
-Prerequisites
-=============
-
-Create a VF interface from kernel PF interfaces, and then attach them to VM. Suppose PF is 0000:18:00.0 . 
-Generate a VF using commands below and make them in pci-stub mods.
-
-NIC: 4x25G or 2x100G, several TC need breakout mode, then 2x100G is required
-PF: The 1st PF's PCI address 0000:18:00.0 , kernel interface name enp24s0f0 . The 2nd PF's PCI address 0000:18:00.1 , kernel interface name enp24s0f1
-VF: The VFs generated by 0000:18:00.0 , are 0000:18:02.x , The VFs generated by 0000:18:00.1 , are 0000:18:0a.x
-
-Copy correct ``ice.pkg`` into ``/usr/lib/firmware/intel/ice/ddp/``, 
-For the test cases, comms package is expected.
-
-Prepare test toplogoy, in the test case, it requires
-
-- 1 Intel E810 interface
-- 1 network interface enp134s0f0 for sending test packet, which could be connect to the E810 interface
-- Directly connect the 2 interfaces
-- Latest driver and comms pkgs of version
-
-Compile DPDK and testpmd::
-
-    make install -j T=x86_64-native-linuxapp-gcc
-
-1. Create 1 VF from a PF, and set VF mac address::
-
-    echo 1 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
-    ip link set enp24s0f0 vf 0 mac 00:11:22:33:44:55
-          
-2. Bind VF to vfio-pci::
-
-    ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:02.0 
-
-3. Bring up PF and tester port::
-
-    ifconfig enp24s0f0 up
-    ifconfig enp134s0f0 up
-
-4. Launch the testpmd::
-
-    ./x86_64-native-linux-gcc/app/testpmd -l 6-9 -n 4 -w 18:02.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2
-    testpmd>set verbose 1
-    testpmd>set fwd rxonly
-    testpmd>start
-
-5. on tester side, add pfcp.py to "scapy/layers", and copy it to "/root".
-   add "pfcp" to "load_layers" in "scapy/config.py", then start scapy::
- 
-    >>> import sys
-    >>> sys.path.append('/root)
-    >>> from pfcp import PFCP
-    >>>from scapy.contrib.pfcp import *  
-
-Test Case 01: RSS support MAC_IPV4_PFCP_SESSION
-===============================================
+Test Case: RSS support MAC_IPV4_PFCP_SESSION
+============================================
 
 1. DUT create rule for RSS type of MAC_IPV4_PFCP_SESSION::
 
@@ -4253,17 +4186,19 @@ Test Case 01: RSS support MAC_IPV4_PFCP_SESSION
 
 3. Tester use scapy to send the 100 MAC_IPV4_PFCP_SESSION pkts with different SEID::
 
-    sendp([Ether(dst="00:11:22:33:44:55")/IP(src=RandIP(),dst=RandIP())/UDP(sport=RandShort(),dport=RandShort())/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp177s0f1,count=100")
-    
-4. Verify 100 pkts has been sent, 
-and check the 100 pkts has been recieved by DUT in differently 16 queues evenly with differently RSS hash value::
+    sendp([Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21")/UDP(sport=22,dport=8805)/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp134s0f1")
+
+   The SEID can be set to a random value.
+
+4. Check that the 100 pkts have been received by the DUT evenly across the 16 queues, with different RSS hash values.
 
 5. send MAC_IPV4_PFCP_NODE and MAC_IPV6_PFCP_SESSION pkts::
 
-    sendp([Ether(dst="00:11:22:33:44:55")/IP(src=RandIP(),dst=RandIP())/UDP(sport=RandShort(),dport=RandShort())/PFCP(Sfield=0)/Raw('x' * 80)],iface="enp177s0f1", count=100)
-    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=RandShort(),dport=RandShort())/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp177s0f1",count=100)
+    sendp([Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21")/UDP(sport=22,dport=8805)/PFCP(Sfield=0)/Raw('x' * 80)],iface="enp134s0f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=22,dport=8805)/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp134s0f1")
 
-   check the packet is distributed to queue 0.
+   The SEID can be set to a random value.
+   Check that the packets are distributed to queue 0.
 
 6. DUT verify rule can be listed and destroyed::
 
@@ -4273,12 +4208,12 @@ and check the 100 pkts has been recieved by DUT in differently 16 queues evenly
 
     testpmd> flow destroy 0 rule 0
 
-8. Verify 100 pkts has been sent, 
-and check the 100 pkts has been recieved by DUT in queue 0::
+8. Send the 100 matched pkts,
+and check that the 100 pkts have been received by the DUT in queue 0.
 
 
-Test Case 02: RSS support MAC_IPV6_PFCP_SESSION
-===============================================
+Test Case: RSS support MAC_IPV6_PFCP_SESSION
+============================================
 
 1. DUT create rule for the RSS type for MAC_IPV6_PFCP_SESSION::
 
@@ -4286,17 +4221,19 @@ Test Case 02: RSS support MAC_IPV6_PFCP_SESSION
 
 2. Tester use scapy to send the 100 MAC_IPV6_PFCP_SESSION pkts with different SEID::
 
-    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=RandShort(),dport=RandShort())/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp177s0f1",count=100)
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=22,dport=8805)/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp134s0f1")
+
+   The SEID can be set to a random value.
 
-3. Verify 100 pkts has been sent, 
-and check the 100 pkts has been recieved by DUT in differently 16 queues evenly with differently RSS hash value::
+3. Check that the 100 pkts have been received by the DUT evenly across the 16 queues, with different RSS hash values.
 
 4. send MAC_IPV6_PFCP_NODE and MAC_IPV4_PFCP_SESSION pkts::
 
-    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=RandShort(),dport=RandShort())/PFCP(Sfield=0)/Raw('x' * 80)],iface="enp177s0f1, count=100")
-    sendp([Ether(dst="00:11:22:33:44:55")/IP(src=RandIP(),dst=RandIP())/UDP(sport=RandShort(),dport=RandShort())/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp177s0f1, count=100")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=22,dport=8805)/PFCP(Sfield=0)/Raw('x' * 80)],iface="enp134s0f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21")/UDP(sport=22,dport=8805)/PFCP(Sfield=1, SEID=12)/Raw('x' * 80)],iface="enp134s0f1")
 
-   check the packet is distributed to different queue.
+   The SEID can be set to a random value.
+   Check that the packets are distributed to queue 0.
 
 6. DUT verify rule can be listed and destroyed::
 
@@ -4306,11 +4243,11 @@ and check the 100 pkts has been recieved by DUT in differently 16 queues evenly
 
     testpmd> flow destroy 0 rule 0
 
-8. Verify 100 pkts has been sent, 
-and check the 100 pkts has been recieved by DUT in queue 0::
+8. Send the 100 matched pkts,
+and check that the 100 pkts have been received by the DUT in queue 0.
 
-Test Case 03: RSS Negative test with OS default
-====================================================
+Test Case: RSS Negative test with OS default
+============================================
 
 1. load OS package, and rmmod ice driver. insmod ice driver
 
-- 
2.14.3

