* [PATCH V5 0/3] ice PF enable buffer split
@ 2022-12-20  6:09 Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 1/3] test_plans/index: add new test plan for ice_buffer_split Yaqi Tang
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Yaqi Tang @ 2022-12-20  6:09 UTC (permalink / raw)
  To: dts; +Cc: Yaqi Tang

ice supports protocol split based on the current buffer split.

Yaqi Tang (3):
  test_plans/index
  test_plans/ice_buffer_split
  tests/ice_buffer_split

 test_plans/ice_buffer_split_test_plan.rst | 1247 ++++++++++++++++
 test_plans/index.rst                      |    1 +
 tests/TestSuite_ice_buffer_split.py       | 1631 +++++++++++++++++++++
 3 files changed, 2879 insertions(+)
 create mode 100644 test_plans/ice_buffer_split_test_plan.rst
 create mode 100644 tests/TestSuite_ice_buffer_split.py

-- 
2.25.1



* [dts][PATCH V5 1/3] test_plans/index: add new test plan for ice_buffer_split
  2022-12-20  6:09 [PATCH V5 0/3] ice PF enable buffer split Yaqi Tang
@ 2022-12-20  6:09 ` Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 2/3] test_plans/ice_buffer_split: ice PF enable buffer split Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 3/3] tests/ice_buffer_split: " Yaqi Tang
  2 siblings, 0 replies; 7+ messages in thread
From: Yaqi Tang @ 2022-12-20  6:09 UTC (permalink / raw)
  To: dts; +Cc: Yaqi Tang

Add a new test plan for ice support of protocol split based on the current buffer split.

Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 4e0e8133..356cbf9d 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -27,6 +27,7 @@ The following are the test plans for the DPDK DTS automated test system.
     ice_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan
     ice_advanced_iavf_rss_pppol2tpoudp_test_plan
     ice_advanced_rss_vlan_esp_ah_l2tp_pfcp_test_plan
+    ice_buffer_split_test_plan
     ice_dcf_acl_filter_test_plan
     ice_dcf_data_path_test_plan
     ice_dcf_switch_filter_test_plan
-- 
2.25.1



* [dts][PATCH V5 2/3] test_plans/ice_buffer_split: ice PF enable buffer split
  2022-12-20  6:09 [PATCH V5 0/3] ice PF enable buffer split Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 1/3] test_plans/index: add new test plan for ice_buffer_split Yaqi Tang
@ 2022-12-20  6:09 ` Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 3/3] tests/ice_buffer_split: " Yaqi Tang
  2 siblings, 0 replies; 7+ messages in thread
From: Yaqi Tang @ 2022-12-20  6:09 UTC (permalink / raw)
  To: dts; +Cc: Yaqi Tang

Packets received on the ice scalar path can be divided into two mempools with the expected hdr and payload length/content specified by the protocol type.

Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
---
 test_plans/ice_buffer_split_test_plan.rst | 1247 +++++++++++++++++++++
 1 file changed, 1247 insertions(+)
 create mode 100644 test_plans/ice_buffer_split_test_plan.rst

diff --git a/test_plans/ice_buffer_split_test_plan.rst b/test_plans/ice_buffer_split_test_plan.rst
new file mode 100644
index 00000000..5c4b3df5
--- /dev/null
+++ b/test_plans/ice_buffer_split_test_plan.rst
@@ -0,0 +1,1247 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+==========================
+ICE PF Enable Buffer Split
+==========================
+
+Description
+===========
+Protocol based buffer split consists of splitting a received packet into two separate regions based on the packet content.
+It is useful in some scenarios, such as GPU acceleration: the split helps to enable true zero copy and hence can
+improve performance significantly.
+
+The ice PF supports protocol split based on the current buffer split. When an Rx queue is
+configured with the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload and a corresponding protocol,
+received packets are split directly into two different mempools with the expected hdr and payload length/content.
+
+For validation, we will focus on:
+
+1. The configuration of protocol based buffer split is applied.
+
+   Set up buffer split per port or per queue::
+
+    testpmd> port config 0 rx_offload buffer_split on
+    testpmd> port 0 rxq 0 rx_offload buffer_split on
+
+   Set the protocol type of buffer split::
+
+    testpmd> set rxhdrs (eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|ipv4-udp|ipv6-udp|
+             ipv4-sctp|ipv6-sctp|grenat|inner-eth|inner-ipv4|inner-ipv6|
+             inner-ipv4-tcp|inner-ipv6-tcp|inner-ipv4-udp|inner-ipv6-udp|
+             inner-ipv4-sctp|inner-ipv6-sctp)
+
+2. Packets received on the ice scalar path (--force-max-simd-bitwidth=64) can be divided into
+   two mempools with the expected hdr and payload length/content specified by the protocol type.
+
+.. note::
+
+    Currently, 6 kinds of buffer split segmentation are supported:
+
+    * Outer mac: set rxhdrs eth
+    * Inner mac: set rxhdrs inner-eth
+    * Inner l3: set rxhdrs ipv4|ipv6|inner-ipv4|inner-ipv6
+    * Inner l4: set rxhdrs ipv4-udp|ipv4-tcp|ipv6-udp|ipv6-tcp|inner-ipv4-udp|inner-ipv4-tcp|inner-ipv6-udp|inner-ipv6-tcp
+    * Inner sctp: set rxhdrs ipv4-sctp|ipv6-sctp|inner-ipv4-sctp|inner-ipv6-sctp
+    * Tunnel: set rxhdrs grenat
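+
+The hdr_len expected in each check below is the sum of the header lengths up to
+and including the configured split point. A minimal sketch of this bookkeeping
+(plain Python; the HDR table and helper name are illustrative only, not part of
+any test suite API)::
+
+    # Per-protocol header sizes in bytes
+    HDR = {"mac": 14, "ipv4": 20, "ipv6": 40, "udp": 8,
+           "tcp": 20, "sctp": 12, "gre": 4, "vxlan": 8}
+
+    def expected_hdr_len(layers):
+        """Sum of header lengths up to and including the split point."""
+        return sum(HDR[layer] for layer in layers)
+
+    # e.g. buffer split at the inner l4 of MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY:
+    expected_hdr_len(["mac", "ipv4", "udp", "vxlan", "mac", "ipv4", "udp"])  # 92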
+
+Prerequisites
+=============
+
+Topology
+--------
+DUT port 0 <----> Tester port 0
+
+Hardware
+--------
+Supported NICs: Intel® Ethernet 800 Series E810-XXVDA4/E810-CQ
+
+Software
+--------
+dpdk: http://dpdk.org/git/dpdk
+runtime command: https://doc.dpdk.org/guides/testpmd_app_ug/testpmd_funcs.html
+
+General Set Up
+--------------
+1. Compile DPDK with -Dc_args='-DRTE_ETHDEV_DEBUG_RX=1' to dump segment data::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dc_args='-DRTE_ETHDEV_DEBUG_RX=1' --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+2. Get the PCI device IDs and interfaces of the DUT and tester ports.
+   For example, 0000:3b:00.0 and 0000:3b:00.1 are the PCI device IDs,
+   and ens785f0 and ens785f1 are the interfaces::
+
+    <dpdk dir># ./usertools/dpdk-devbind.py -s
+
+    0000:3b:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+    0000:3b:00.1 'Device 159b' if=ens785f1 drv=ice unused=vfio-pci
+
+3. Bind the DUT port to dpdk::
+
+    <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id> 
+
+Test Case
+=========
+The test cases verify buffer split for 6 types of packets:
+
+* MAC_IPV4_UDP_PAY
+* MAC_IPV4_IPV4_UDP_PAY
+* MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY
+* MAC_IPV4_UDP_VXLAN_IPV4_UDP_PAY
+* MAC_IPV4_GRE_MAC_IPV4_UDP_PAY 
+* MAC_IPV4_GRE_IPV4_UDP_PAY
+
+Common Steps
+------------
+1. port stop all
+2. port config 0 rx_offload buffer_split on
+3. show port 0 rx_offload configuration
+4. port config 0 udp_tunnel_port add vxlan 4789
+5. set rxhdrs eth
+6. show config rxhdrs
+7. port start all
+8. start
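+
+Throughout the test cases, the expected on-wire length of each packet can be
+cross-checked on the tester with scapy before sending. A minimal sketch (Python/scapy)::
+
+    from scapy.all import IP, Ether
+
+    pkt = Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)
+    # 14 (Ether) + 20 (IPv4) + 30 (payload) = 64 bytes
+    print(len(pkt))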
+
+Test Case 1: PORT_BUFFER_SPLIT_OUTER_MAC
+----------------------------------------
+Launch testpmd with two ports, configure buffer split on the outer mac for port 0, send matched packets to port 0, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the outer mac.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with two ports::
+ 
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 -a 3b:00.1 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 
+
+2. Execute common steps to configure port 0 buffer split on outer mac.
+
+3. Send matched packets to port 0.
+    
+    Send MAC_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=50.
+    
+    Send MAC_IPV4_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=90.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=80.
+   
+    Send MAC_IPV6_UDP_VXLAN_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/("Y"*30)], iface="ens260f0") 
+      
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=126.
+   
+    Send MAC_IPV4_GRE_MAC_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=68.
+
+    Send MAC_IPV4_GRE_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/("Y"*30)], iface="ens260f0")
+   
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=94.
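+
+    The expected lengths follow from the fixed header sizes; for example, for
+    MAC_IPV4_GRE_IPV6_PAY the arithmetic is (a worked sketch, not testpmd output)::
+
+      # 14 (Ether) + 20 (IPv4) + 4 (GRE) + 40 (IPv6) + 30 (payload) = 108 bytes total
+      # split at the outer mac: hdr_len = 14, pay_len = 108 - 14 = 94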
+
+4. Send matched packets to port 1::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/("Y"*30)], iface="ens260f1") 
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/("Y"*30)], iface="ens260f1")
+
+   Check that the received packets are not divided into two mempools and that the segment length is empty.
+
+Test Case 2: PORT_BUFFER_SPLIT_INNER_MAC
+----------------------------------------
+Launch testpmd with two ports, configure buffer split on the inner mac for port 0, send matched packets to port 0, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner mac.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with two ports::
+ 
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 -a 3b:00.1 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 
+
+2. Modify common step 5 to::
+ 
+    set rxhdrs inner-eth
+
+   Execute common steps to configure port 0 buffer split on inner mac.
+
+3. Send matched packets to port 0.
+
+    Send MAC_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=50.
+    
+    Send MAC_IPV4_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=90.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=64 and pay_len=30.
+    
+    Send MAC_IPV6_UDP_VXLAN_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/("Y"*30)], iface="ens260f0")
+   
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=126.
+
+    Send MAC_IPV4_GRE_MAC_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=52 and pay_len=30.
+
+    Send MAC_IPV6_GRE_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IPv6()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=114.
+
+4. Send matched packets to port 1::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/("Y"*30)], iface="ens260f1") 
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/("Y"*30)], iface="ens260f1")
+
+   Check that the received packets are not divided into two mempools and that the segment length is empty.
+
+Test Case 3: PORT_BUFFER_SPLIT_INNER_L3
+---------------------------------------
+Launch testpmd with two ports, configure buffer split on the inner l3 for port 0, send matched packets to port 0, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner l3.
+Whether buffer split is configured on ipv4 or ipv6, packets are split at the inner ipv4 or inner ipv6 header.
+
+Subcase 1: buffer split ipv4
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with two ports::
+ 
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 -a 3b:00.1 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 
+
+2. Modify common step 5 to::
+ 
+    set rxhdrs ipv4
+
+   Execute common steps to configure port 0 buffer split on inner l3.
+
+3. Send matched packets to port 0.
+    
+    Send MAC_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=34 and pay_len=30.
+
+    Send MAC_IPV6_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/IP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=74 and pay_len=30.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV6_PAY packet::
+      
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IPv6()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=104 and pay_len=30.
+
+    Send MAC_IPV6_UDP_VXLAN_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IP()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=90 and pay_len=30.
+
+    Send MAC_IPV4_GRE_MAC_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=92 and pay_len=30.
+
+    Send MAC_IPV6_GRE_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=78 and pay_len=30.
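+
+    As a worked example of the inner l3 split point, for MAC_IPV4_UDP_VXLAN_MAC_IPV6_PAY
+    the header region ends after the inner IPv6 header (a sketch of the arithmetic)::
+
+      # 14 (Ether) + 20 (IPv4) + 8 (UDP) + 8 (VXLAN) + 14 (inner Ether) + 40 (inner IPv6)
+      # hdr_len = 104, pay_len = 30 (the raw payload)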
+
+4. Send matched packets to port 1::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/IP()/("Y"*30)], iface="ens260f1") 
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IPv6()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether()/IPv6()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/("Y"*30)], iface="ens260f1")    
+ 
+   Check that the received packets are not divided into two mempools and that the segment length is empty.
+
+Subcase 2: buffer split ipv6
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs ipv6
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner l3.
+
+Subcase 3: buffer split inner-ipv4
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner l3.
+
+Subcase 4: buffer split inner-ipv6
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv6
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner l3.
+
+Test Case 4: PORT_BUFFER_SPLIT_INNER_L4
+---------------------------------------
+Launch testpmd with two ports, configure buffer split on the inner udp/tcp for port 0, send matched packets to port 0, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+Whether buffer split is configured on udp or tcp, packets are split at the inner udp or inner tcp header.
+
+Subcase 1: buffer split ipv4-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with two ports::
+ 
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 -a 3b:00.1 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 
+
+2. Modify common step 5 to::
+ 
+    set rxhdrs ipv4-udp
+
+   Execute common steps to configure port 0 buffer split on inner udp/tcp.
+
+3. Send matched packets to port 0.
+   
+    #UDP packets
+    Send MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=42 and pay_len=30.
+    
+    Send MAC_IPV4_IPV6_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/UDP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=82 and pay_len=30.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f0")
+       
+    Check that the received packets are divided into two mempools with hdr_len=92 and pay_len=30.
+
+    Send MAC_IPV6_UDP_VXLAN_IPV6_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/UDP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=118 and pay_len=30.
+
+    Send MAC_IPV6_GRE_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=100 and pay_len=30.
+
+    Send MAC_IPV4_GRE_IPV6_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/UDP()/("Y"*30)], iface="ens260f0")    
+
+    Check that the received packets are divided into two mempools with hdr_len=86 and pay_len=30.
+
+    #TCP packets
+    Send MAC_IPV6_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/TCP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=74 and pay_len=30.
+
+    Send MAC_IPV6_IPV4_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/IP()/TCP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=94 and pay_len=30.
+ 
+    Send MAC_IPV6_UDP_VXLAN_MAC_IPV6_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IPv6()/TCP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=144 and pay_len=30.
+
+    Send MAC_IPV4_UDP_VXLAN_IPV4_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP(sport=200, dport=4790)/VXLAN()/IP()/TCP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=90 and pay_len=30.
+
+    Send MAC_IPV4_GRE_MAC_IPV6_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/TCP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=112 and pay_len=30.
+
+    Send MAC_IPV6_GRE_IPV4_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/TCP()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=98 and pay_len=30.
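+
+    As a worked example of the inner l4 split point, for
+    MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY (a sketch of the arithmetic)::
+
+      # 14 (Ether) + 20 (IPv4) + 8 (UDP) + 8 (VXLAN) + 14 (inner Ether) + 20 (inner IPv4) + 8 (inner UDP)
+      # hdr_len = 92, pay_len = 30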
+
+4. Send mismatched packets to port 0::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)], iface="ens260f0")
+
+   Check that the received packets are not divided into two mempools and hdr_len=0.
+
+5. Send matched packets to port 1::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/UDP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/UDP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/UDP()/("Y"*30)], iface="ens260f1")
+    
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/TCP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/IP()/TCP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IPv6()/TCP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP(sport=200, dport=4790)/VXLAN()/IP()/TCP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/TCP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/TCP()/("Y"*30)], iface="ens260f1")
+
+   Check that the received packets are not divided into two mempools and that the segment length is empty.
+
+Subcase 2: buffer split ipv6-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs ipv6-udp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Subcase 3: buffer split ipv4-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs ipv4-tcp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Subcase 4: buffer split ipv6-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs ipv6-tcp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Subcase 5: buffer split inner-ipv4-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4-udp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Subcase 6: buffer split inner-ipv6-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv6-udp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Subcase 7: buffer split inner-ipv4-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4-tcp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Subcase 8: buffer split inner-ipv6-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv6-tcp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp/tcp.
+
+Test Case 5: PORT_BUFFER_SPLIT_INNER_SCTP
+-----------------------------------------
+Launch testpmd with two ports, configure buffer split on the inner sctp for port 0, send matched packets to port 0, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Subcase 1: buffer split ipv4-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with two ports::
+ 
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 -a 3b:00.1 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 
+
+2. Modify common step 5 to::
+ 
+    set rxhdrs ipv4-sctp
+
+   Execute common steps to configure port 0 buffer split on inner sctp.
+
+3. Send matched packets to port 0.
+
+    Send MAC_IPV4_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/SCTP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=46 and pay_len=30.
+    
+    Send MAC_IPV4_IPV6_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/SCTP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=86 and pay_len=30.
+ 
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=96 and pay_len=30.
+
+    Send MAC_IPV6_UDP_VXLAN_IPV6_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/SCTP()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=122 and pay_len=30.
+
+    Send MAC_IPV6_GRE_MAC_IPV4_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=104 and pay_len=30.
+
+    Send MAC_IPV4_GRE_IPV6_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/SCTP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=90 and pay_len=30.
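+
+    As a worked example of the inner sctp split point, for
+    MAC_IPV4_UDP_VXLAN_MAC_IPV4_SCTP_PAY (a sketch of the arithmetic)::
+
+      # 14 (Ether) + 20 (IPv4) + 8 (UDP) + 8 (VXLAN) + 14 (inner Ether) + 20 (inner IPv4) + 12 (SCTP common header)
+      # hdr_len = 96, pay_len = 30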
+    
+4. Send mismatched packets to port 0.
+    
+    Send MAC_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are not divided into two mempools, with hdr_len=0 and pay_len=64.
+    
+    Send MAC_IPV4_GRE_MAC_IPV4_UDP_PAY packet::
+    
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are not divided into two mempools, with hdr_len=0 and pay_len=110.
+
+    Send MAC_IPV4_GRE_MAC_IPV4_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/TCP()/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are not divided into two mempools, with hdr_len=0 and pay_len=122.
+
+5. Send matched packets to port 1::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/SCTP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/SCTP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/SCTP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/SCTP()/("Y"*30)], iface="ens260f1")
+
+   Check that the received packets are not divided into two mempools and that the segment length is empty.
+
+Subcase 2: buffer split ipv6-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs ipv6-sctp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Subcase 3: buffer split inner-ipv4-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4-sctp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Subcase 4: buffer split inner-ipv6-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv6-sctp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Test Case 6: PORT_BUFFER_SPLIT_TUNNEL
+-------------------------------------
+Launch testpmd with two ports, configure buffer split on the tunnel for port 0, send matched packets to port 0, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the tunnel header.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with two ports::
+ 
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 -a 3b:00.1 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 
+
+2. Modify common step 5 to::
+ 
+    set rxhdrs grenat
+
+   Execute common steps to configure port 0 buffer split on tunnel.
+
+3. Send matched packets to port 0.
+    
+    Send MAC_IPV4_IPV4_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/IP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=34 and pay_len=50.
+ 
+    Send MAC_IPV6_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/IPv6()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=54 and pay_len=70.
+ 
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=50 and pay_len=72.
+
+    Send MAC_IPV6_UDP_VXLAN_IPV6_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/TCP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=70 and pay_len=90.
+
+    Send MAC_IPV4_GRE_MAC_IPV6_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/SCTP()/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=38 and pay_len=96.
+
+    Send MAC_IPV6_GRE_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/UDP()/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=58 and pay_len=58.
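+
+    For the grenat split point the header region ends right after the tunnel
+    header (VXLAN or GRE), so the whole inner frame lands in the payload mempool;
+    for MAC_IPV6_GRE_IPV4_UDP_PAY (a sketch of the arithmetic)::
+
+      # hdr_len = 14 (Ether) + 40 (IPv6) + 4 (GRE) = 58
+      # pay_len = 20 (inner IPv4) + 8 (UDP) + 30 (payload) = 58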
+
+4. Send mismatched packet to port 0::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/("Y"*30)], iface="ens260f0")
+    
+   Check that the received packets are not divided into two mempools, with hdr_len=0 and pay_len=72.
+
+5. Send matched packets to port 1::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/IP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/IPv6()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/TCP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/SCTP()/("Y"*30)], iface="ens260f1")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/UDP()/("Y"*30)], iface="ens260f1")
+
+   Check that the received packets are not divided into two mempools and that the segment length is empty.
+
+.. note::
+
+    Test Cases 7~14 are queue buffer split cases. They verify that the configuration of buffer split on a single queue or a
+    queue group takes effect, and that it does not affect the creation, matching and destruction of fdir rules.
+
+Test Case 7: QUEUE_BUFFER_SPLIT_OUTER_MAC
+-----------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the outer mac, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the outer mac.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 1 rx_offload buffer_split on
+
+   Execute common steps to configure queue buffer split on outer mac.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 1 / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.0.2")/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=14 and pay_len=100.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1",dst="192.168.0.2")/("Y"*30)], iface="ens260f0") 
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.1.2")/("Y"*30)], iface="ens260f0") 
+
+   If the received packets are distributed to queue 1 by RSS, check that they are divided into two mempools with hdr_len=14 and pay_len=100.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
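+
+   A minimal sketch (Python) of extracting the Rx queue id from testpmd verbose
+   output; the regex assumes lines like "port 0/queue 1: received 1 packets",
+   whose exact format may vary between DPDK versions::
+
+    import re
+
+    def rx_queue(verbose_output):
+        # Return the first Rx queue id seen in the verbose output, else None.
+        m = re.search(r"port \d+/queue (\d+): received \d+ packets", verbose_output)
+        return int(m.group(1)) if m else None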
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Test Case 8: QUEUE_BUFFER_SPLIT_INNER_MAC
+-----------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the inner mac, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner mac.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 2 rx_offload buffer_split on
+    port 0 rxq 3 rx_offload buffer_split on 
+
+   Modify common step 5 to::
+   
+    set rxhdrs inner-eth
+
+   Execute common steps to configure queue buffer split on inner mac.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions rss queues 2 3 end / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+  
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.0.2")/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=64 and pay_len=50.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1",dst="192.168.0.2")/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.1.2")/("Y"*30)], iface="ens260f0")  
+
+   If the received packets are distributed to queue 2 or queue 3 by RSS, check that they are divided into two mempools with hdr_len=64 and pay_len=50.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Test Case 9: QUEUE_BUFFER_SPLIT_INNER_IPV4
+------------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the inner ipv4, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner ipv4.
+
+Subcase 1: buffer split ipv4
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 2 rx_offload buffer_split on
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv4
+
+   Execute common steps to configure queue buffer split on inner ipv4.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 2 / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.0.2")/("Y"*30)], iface="ens260f0") 
+
+    Check that the received packets are divided into two mempools with hdr_len=84 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1",dst="192.168.0.2")/("Y"*30)], iface="ens260f0") 
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.1.2")/("Y"*30)], iface="ens260f0") 
+
+   If the received packets are distributed to queue 2 by RSS, check that they are divided into two mempools with hdr_len=84 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 2: buffer split inner-ipv4
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner ipv4.
+
+Test Case 10: QUEUE_BUFFER_SPLIT_INNER_IPV6
+-------------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the inner ipv6, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner ipv6.
+
+Subcase 1: buffer split ipv6
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 4 rx_offload buffer_split on
+    port 0 rxq 5 rx_offload buffer_split on 
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv6
+
+   Execute common steps to configure queue buffer split on inner ipv6.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / end actions rss queues 4 5 end / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV6_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/("Y"*30)], iface="ens260f0")
+ 
+    Check that the received packets are divided into two mempools with hdr_len=54 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::8",dst="2001::2")/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::9")/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 4 or queue 5 by RSS, check that they are divided into two mempools with hdr_len=54 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 2: buffer split inner-ipv6
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv6
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner ipv6.
+
+Test Case 11: QUEUE_BUFFER_SPLIT_INNER_UDP
+------------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the inner udp, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp.
+
+Subcase 1: buffer split ipv4-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 3 rx_offload buffer_split on
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv4-udp
+
+   Execute common steps to configure queue buffer split on inner udp.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp dst is 23 / end actions queue index 3 / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(dport=23)/("Y"*30)], iface="ens260f0")  
+
+    Check that the received packets are divided into two mempools with hdr_len=92 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1", dst="192.168.0.2")/UDP(dport=23)/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 3 by RSS, check that they are divided into two mempools with hdr_len=92 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 2: buffer split ipv6-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+     port 0 rxq 3 rx_offload buffer_split on
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv6-udp
+
+   Execute common steps to configure queue buffer split on inner udp.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / udp dst is 23 / end actions queue index 3 / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV6_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/UDP(dport=23)/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=62 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::8",dst="2001::2")/UDP(dport=23)/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/UDP(dport=24)/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 3 by RSS, check that they are divided into two mempools with hdr_len=62 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 3: buffer split inner-ipv4-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4-udp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp.
+
+Subcase 4: buffer split inner-ipv6-udp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 2 test step 2 to::
+
+    set rxhdrs inner-ipv6-udp
+
+2. Execute the subcase 2 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner udp.
+
+Test Case 12: QUEUE_BUFFER_SPLIT_INNER_TCP
+------------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the inner tcp, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner tcp.
+
+Subcase 1: buffer split ipv4-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 2 rx_offload buffer_split on
+    port 0 rxq 3 rx_offload buffer_split on 
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv4-tcp
+
+   Execute common steps to configure queue buffer split on inner tcp.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / tcp dst is 23 / end actions rss queues 2 3 end / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=23)/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=104 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1", dst="192.168.0.2")/TCP(dport=23)/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=24)/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 2 or queue 3 by RSS, check that they are divided into two mempools with hdr_len=104 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 2: buffer split ipv6-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+     port 0 rxq 2 rx_offload buffer_split on
+     port 0 rxq 3 rx_offload buffer_split on 
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv6-tcp
+
+   Execute common steps to configure queue buffer split on inner tcp.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / tcp dst is 23 / end actions rss queues 2 3 end / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV6_TCP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/TCP(dport=23)/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=74 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::8",dst="2001::2")/TCP(dport=23)/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/TCP(dport=24)/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 2 or queue 3 by RSS, check that they are divided into two mempools with hdr_len=74 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 3: buffer split inner-ipv4-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4-tcp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner tcp.
+
+Subcase 4: buffer split inner-ipv6-tcp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 2 test step 2 to::
+
+    set rxhdrs inner-ipv6-tcp
+
+2. Execute the subcase 2 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner tcp.
+
+Test Case 13: QUEUE_BUFFER_SPLIT_INNER_SCTP
+-------------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the inner sctp, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Subcase 1: buffer split ipv4-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 5 rx_offload buffer_split on
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv4-sctp
+
+   Execute common steps to configure queue buffer split on inner sctp.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / sctp dst is 23 / end actions queue index 5 / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(dport=23)/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=96 and pay_len=30.
+
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1", dst="192.168.0.2")/SCTP(dport=23)/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(dport=24)/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 5 by RSS, check that they are divided into two mempools with hdr_len=96 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 2: buffer split ipv6-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+     port 0 rxq 5 rx_offload buffer_split on
+
+   Modify common step 5 to::
+   
+    set rxhdrs ipv6-sctp
+
+   Execute common steps to configure queue buffer split on inner sctp.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / sctp dst is 23 / end actions queue index 5 / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV6_SCTP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/SCTP(dport=23)/("Y"*30)], iface="ens260f0")
+    
+    Check that the received packets are divided into two mempools with hdr_len=66 and pay_len=30.
+   
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::8",dst="2001::2")/SCTP(dport=23)/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/SCTP(dport=24)/("Y"*30)], iface="ens260f0")
+
+   If the received packets are distributed to queue 5 by RSS, check that they are divided into two mempools with hdr_len=66 and pay_len=30.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
+Subcase 3: buffer split inner-ipv4-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 1 test step 2 to::
+
+    set rxhdrs inner-ipv4-sctp
+
+2. Execute the subcase 1 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Subcase 4: buffer split inner-ipv6-sctp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Modify subcase 2 test step 2 to::
+
+    set rxhdrs inner-ipv6-sctp
+
+2. Execute the subcase 2 test steps and check that the received packets are divided into two mempools with the expected hdr and payload length/content split at the inner sctp.
+
+Test Case 14: QUEUE_BUFFER_SPLIT_TUNNEL
+---------------------------------------
+Launch testpmd with one port and multiple queues, configure queue buffer split on the tunnel, send matched packets, and check
+that the received packets are divided into two mempools with the expected hdr and payload length/content split at the tunnel header.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with one port and multiple queues::
+
+    <dpdk build dir>/app/dpdk-testpmd <EAL options> -a 3b:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=8 --rxq=8
+
+2. Modify common step 2 to::
+ 
+    port 0 rxq 4 rx_offload buffer_split on
+    port 0 rxq 5 rx_offload buffer_split on
+
+   Modify common step 5 to::
+   
+    set rxhdrs grenat
+
+   Execute common steps to configure queue buffer split on tunnel.
+
+3. Create a fdir rule::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp dst is 23 / end actions rss queues 4 5 end / mark / end
+
+4. Send matched packets.
+
+    Send MAC_IPV4_UDP_VXLAN_MAC_IPV4_UDP_PAY packet::
+
+      sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(dport=23)/("Y"*30)], iface="ens260f0")
+
+    Check that the received packets are divided into two mempools with hdr_len=50 and pay_len=72.
+  
+5. Send mismatched packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1", dst="192.168.0.2")/UDP(dport=23)/("Y"*30)], iface="ens260f0")
+    sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(dport=24)/("Y"*30)], iface="ens260f0")
+    
+   If the received packets are distributed to queue 4 or queue 5 by RSS, check that they are divided into two mempools with hdr_len=50 and pay_len=72.
+   Otherwise, check that they are not divided into two mempools and the segment length is empty.
+
+6. Destroy the rule::
+
+    flow destroy 0 rule 0  
+
-- 
2.25.1



* [dts][PATCH V5 3/3] tests/ice_buffer_split: ice PF enable buffer split
  2022-12-20  6:09 [PATCH V5 0/3] ice PF enable buffer split Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 1/3] test_plans/index: add new test plan for ice_buffer_split Yaqi Tang
  2022-12-20  6:09 ` [dts][PATCH V5 2/3] test_plans/ice_buffer_split: ice PF enable buffer split Yaqi Tang
@ 2022-12-20  6:09 ` Yaqi Tang
  2022-12-20  9:33   ` Jiang, YuX
                     ` (2 more replies)
  2 siblings, 3 replies; 7+ messages in thread
From: Yaqi Tang @ 2022-12-20  6:09 UTC (permalink / raw)
  To: dts; +Cc: Yaqi Tang

Packets received in the ice scalar path can be divided into two mempools with the expected header and payload length/content, as specified by protocol type.

Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
---
 tests/TestSuite_ice_buffer_split.py | 1631 +++++++++++++++++++++++++++
 1 file changed, 1631 insertions(+)
 create mode 100644 tests/TestSuite_ice_buffer_split.py

diff --git a/tests/TestSuite_ice_buffer_split.py b/tests/TestSuite_ice_buffer_split.py
new file mode 100644
index 00000000..bde1ca0d
--- /dev/null
+++ b/tests/TestSuite_ice_buffer_split.py
@@ -0,0 +1,1631 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+#
+
+import copy
+import os
+import re
+import time
+
+from framework.packet import Packet
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+from framework.utils import GREEN, RED
+
+from .rte_flow_common import FdirProcessing
+
+# define length
+MAC_HEADER_LEN = 14
+IPV4_HEADER_LEN = 20
+IPV6_HEADER_LEN = 40
+UDP_HEADER_LEN = 8
+TCP_HEADER_LEN = 20
+SCTP_HEADER_LEN = 12
+GRE_HEADER_LEN = 4
+VXLAN_HEADER_LEN = 8
+PAY_LEN = 30
+
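+# Scapy packet templates keyed by packet type; ("Y"*30) is the 30-byte payload (PAY_LEN).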
+port_buffer_split_mac_matched_pkts = {
+    "mac_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)',
+    "mac_ipv4_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/("Y"*30)',
+    "mac_ipv6_udp_vxlan_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/("Y"*30)',
+    "mac_ipv4_gre_mac_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/("Y"*30)',
+    "mac_ipv4_gre_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/("Y"*30)',
+}
+
+port_buffer_split_inner_l3_matched_pkts = {
+    "mac_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)',
+    "mac_ipv6_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/IP()/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IPv6()/("Y"*30)',
+    "mac_ipv6_udp_vxlan_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IP()/("Y"*30)',
+    "mac_ipv4_gre_mac_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/("Y"*30)',
+    "mac_ipv6_gre_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/("Y"*30)',
+}
+
+port_buffer_split_inner_l4_matched_pkts = {
+    "mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/("Y"*30)',
+    "mac_ipv4_ipv6_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/UDP()/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)',
+    "mac_ipv6_udp_vxlan_ipv6_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/UDP()/("Y"*30)',
+    "mac_ipv6_gre_mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)',
+    "mac_ipv4_gre_ipv6_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/UDP()/("Y"*30)',
+    "mac_ipv6_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/TCP()/("Y"*30)',
+    "mac_ipv6_ipv4_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/IP()/TCP()/("Y"*30)',
+    "mac_ipv6_udp_vxlan_mac_ipv6_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IPv6()/TCP()/("Y"*30)',
+    "mac_ipv4_udp_vxlan_ipv4_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP(sport=200, dport=4790)/VXLAN()/IP()/TCP()/("Y"*30)',
+    "mac_ipv4_gre_mac_ipv6_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/TCP()/("Y"*30)',
+    "mac_ipv6_gre_ipv4_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/TCP()/("Y"*30)',
+}
+
+port_buffer_split_inner_l4_mismatched_pkts = {
+    "mac_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)',
+    "mac_ipv4_gre_mac_ipv4_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)',
+}
+
+port_buffer_split_inner_sctp_matched_pkts = {
+    "mac_ipv4_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/SCTP()/("Y"*30)',
+    "mac_ipv4_ipv6_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/IPv6()/SCTP()/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_ipv4_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)',
+    "mac_ipv6_udp_vxlan_ipv6_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/SCTP()/("Y"*30)',
+    "mac_ipv6_gre_mac_ipv4_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/SCTP()/("Y"*30)',
+    "mac_ipv4_gre_ipv6_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/IPv6()/SCTP()/("Y"*30)',
+}
+
+port_buffer_split_inner_sctp_mismatched_pkts = {
+    "mac_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/("Y"*30)',
+    "mac_ipv4_gre_mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)',
+    "mac_ipv4_gre_mac_ipv4_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IP()/TCP()/("Y"*30)',
+}
+
+port_buffer_split_tunnel_matched_pkts = {
+    "mac_ipv4_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/IP()/("Y"*30)',
+    "mac_ipv6_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/IPv6()/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP()/UDP()/("Y"*30)',
+    "mac_ipv6_udp_vxlan_ipv6_tcp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(sport=200, dport=4790)/VXLAN()/IPv6()/TCP()/("Y"*30)',
+    "mac_ipv4_gre_mac_ipv6_sctp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/GRE()/Ether(dst="00:11:22:33:44:66")/IPv6()/SCTP()/("Y"*30)',
+    "mac_ipv6_gre_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6()/GRE()/IP()/UDP()/("Y"*30)',
+}
+
+port_buffer_split_tunnel_mismatched_pkts = {
+    "mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/("Y"*30)',
+}
+
+queue_buffer_split_mac_matched_pkts = {
+    "mac_ipv4_udp_vxlan_mac_ipv4_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.0.2")/("Y"*30)',
+}
+
+queue_buffer_split_mac_mismatched_pkts = {
+    "mac_ipv4_udp_vxlan_mac_ipv4_l3src_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1",dst="192.168.0.2")/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_ipv4_l3dst_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.1.2")/("Y"*30)',
+}
+
+queue_buffer_split_inner_ipv6_matched_pkts = {
+    "mac_ipv6_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/("Y"*30)',
+}
+
+queue_buffer_split_inner_ipv6_mismatched_pkts = {
+    "mac_ipv6_l3src_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::8",dst="2001::2")/("Y"*30)',
+    "mac_ipv6_l3dst_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::9")/("Y"*30)',
+}
+
+queue_buffer_split_inner_ipv4_udp_matched_pkts = {
+    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.0.2")/UDP(dport=23)/("Y"*30)',
+}
+
+queue_buffer_split_inner_ipv4_udp_mismatched_pkts = {
+    "mac_ipv4_udp_vxlan_mac_ipv4_udp_l3src_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.1.1",dst="192.168.0.2")/UDP(dport=23)/("Y"*30)',
+    "mac_ipv4_udp_vxlan_mac_ipv4_udp_l4dst_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IP()/UDP()/VXLAN()/Ether(dst="00:11:22:33:44:66")/IP(src="192.168.0.1",dst="192.168.0.2")/UDP(dport=24)/("Y"*30)',
+}
+
+queue_buffer_split_inner_ipv6_udp_matched_pkts = {
+    "mac_ipv6_udp_pay": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/UDP(dport=23)/("Y"*30)',
+}
+
+queue_buffer_split_inner_ipv6_udp_mismatched_pkts = {
+    "mac_ipv6_udp_l3src_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::8",dst="2001::2")/UDP(dport=23)/("Y"*30)',
+    "mac_ipv6_udp_l4dst_changed_pkt": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1",dst="2001::2")/UDP(dport=24)/("Y"*30)',
+}
+
+port_buffer_split_mac = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_mac_matched_pkts["mac_ipv4_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_ipv6_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_udp_vxlan_mac_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv6_udp_vxlan_ipv6_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_gre_mac_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_gre_ipv6_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 1,
+            "send_packet": [
+                port_buffer_split_mac_matched_pkts["mac_ipv4_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_ipv6_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_udp_vxlan_mac_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv6_udp_vxlan_ipv6_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_gre_mac_pay"],
+                port_buffer_split_mac_matched_pkts["mac_ipv4_gre_ipv6_pay"],
+            ],
+            "check_pkt_data": False,
+            "action": "check_no_seg_len",
+        },
+    ],
+}
+
+port_buffer_split_inner_l3 = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv4_pay"],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv6_ipv4_pay"],
+                port_buffer_split_inner_l3_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv6_pay"
+                ],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv6_udp_vxlan_ipv4_pay"],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv4_gre_mac_ipv6_pay"],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv6_gre_ipv4_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 1,
+            "send_packet": [
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv4_pay"],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv6_ipv4_pay"],
+                port_buffer_split_inner_l3_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv6_pay"
+                ],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv6_udp_vxlan_ipv4_pay"],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv4_gre_mac_ipv6_pay"],
+                port_buffer_split_inner_l3_matched_pkts["mac_ipv6_gre_ipv4_pay"],
+            ],
+            "check_pkt_data": False,
+            "action": "check_no_seg_len",
+        },
+    ],
+}
+
+port_buffer_split_inner_l4 = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv4_udp_pay"],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv4_ipv6_udp_pay"],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv6_udp_vxlan_ipv6_udp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv6_gre_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv4_gre_ipv6_udp_pay"],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv6_tcp_pay"],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv6_ipv4_tcp_pay"],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv6_udp_vxlan_mac_ipv6_tcp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv4_udp_vxlan_ipv4_tcp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv4_gre_mac_ipv6_tcp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv6_gre_ipv4_tcp_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_inner_l4_mismatched_pkts["mac_ipv4_pay"],
+                port_buffer_split_inner_l4_mismatched_pkts[
+                    "mac_ipv4_gre_mac_ipv4_sctp_pay"
+                ],
+            ],
+            "check_pkt_data": True,
+            "action": "check_no_segment",
+        },
+        {
+            "port_id": 1,
+            "send_packet": [
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv4_udp_pay"],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv4_ipv6_udp_pay"],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv6_udp_vxlan_ipv6_udp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv6_gre_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv4_gre_ipv6_udp_pay"],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv6_tcp_pay"],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv6_ipv4_tcp_pay"],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv6_udp_vxlan_mac_ipv6_tcp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv4_udp_vxlan_ipv4_tcp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts[
+                    "mac_ipv4_gre_mac_ipv6_tcp_pay"
+                ],
+                port_buffer_split_inner_l4_matched_pkts["mac_ipv6_gre_ipv4_tcp_pay"],
+            ],
+            "check_pkt_data": False,
+            "action": "check_no_seg_len",
+        },
+    ],
+}
+
+port_buffer_split_inner_sctp = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_inner_sctp_matched_pkts["mac_ipv4_sctp_pay"],
+                port_buffer_split_inner_sctp_matched_pkts["mac_ipv4_ipv6_sctp_pay"],
+                port_buffer_split_inner_sctp_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_sctp_pay"
+                ],
+                port_buffer_split_inner_sctp_matched_pkts[
+                    "mac_ipv6_udp_vxlan_ipv6_sctp_pay"
+                ],
+                port_buffer_split_inner_sctp_matched_pkts[
+                    "mac_ipv6_gre_mac_ipv4_sctp_pay"
+                ],
+                port_buffer_split_inner_sctp_matched_pkts["mac_ipv4_gre_ipv6_sctp_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_inner_sctp_mismatched_pkts["mac_ipv4_pay"],
+                port_buffer_split_inner_sctp_mismatched_pkts[
+                    "mac_ipv4_gre_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_inner_sctp_mismatched_pkts[
+                    "mac_ipv4_gre_mac_ipv4_tcp_pay"
+                ],
+            ],
+            "check_pkt_data": True,
+            "action": "check_no_segment",
+        },
+        {
+            "port_id": 1,
+            "send_packet": [
+                port_buffer_split_inner_sctp_matched_pkts["mac_ipv4_sctp_pay"],
+                port_buffer_split_inner_sctp_matched_pkts["mac_ipv4_ipv6_sctp_pay"],
+                port_buffer_split_inner_sctp_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_sctp_pay"
+                ],
+                port_buffer_split_inner_sctp_matched_pkts[
+                    "mac_ipv6_udp_vxlan_ipv6_sctp_pay"
+                ],
+                port_buffer_split_inner_sctp_matched_pkts[
+                    "mac_ipv6_gre_mac_ipv4_sctp_pay"
+                ],
+                port_buffer_split_inner_sctp_matched_pkts["mac_ipv4_gre_ipv6_sctp_pay"],
+            ],
+            "check_pkt_data": False,
+            "action": "check_no_seg_len",
+        },
+    ],
+}
+
+port_buffer_split_tunnel = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_tunnel_matched_pkts["mac_ipv4_ipv4_pay"],
+                port_buffer_split_tunnel_matched_pkts["mac_ipv6_ipv6_pay"],
+                port_buffer_split_tunnel_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_tunnel_matched_pkts[
+                    "mac_ipv6_udp_vxlan_ipv6_tcp_pay"
+                ],
+                port_buffer_split_tunnel_matched_pkts["mac_ipv4_gre_mac_ipv6_sctp_pay"],
+                port_buffer_split_tunnel_matched_pkts["mac_ipv6_gre_ipv4_udp_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                port_buffer_split_tunnel_mismatched_pkts["mac_ipv4_udp_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_no_segment",
+        },
+        {
+            "port_id": 1,
+            "send_packet": [
+                port_buffer_split_tunnel_matched_pkts["mac_ipv4_ipv4_pay"],
+                port_buffer_split_tunnel_matched_pkts["mac_ipv6_ipv6_pay"],
+                port_buffer_split_tunnel_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay"
+                ],
+                port_buffer_split_tunnel_matched_pkts[
+                    "mac_ipv6_udp_vxlan_ipv6_tcp_pay"
+                ],
+                port_buffer_split_tunnel_matched_pkts["mac_ipv4_gre_mac_ipv6_sctp_pay"],
+                port_buffer_split_tunnel_matched_pkts["mac_ipv6_gre_ipv4_udp_pay"],
+            ],
+            "check_pkt_data": False,
+            "action": "check_no_seg_len",
+        },
+    ],
+}
+
+queue_buffer_split_mac = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_mac_matched_pkts["mac_ipv4_udp_vxlan_mac_ipv4_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_mac_mismatched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_l3src_changed_pkt"
+                ],
+                queue_buffer_split_mac_mismatched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_l3dst_changed_pkt"
+                ],
+            ],
+            "check_pkt_data": False,
+            "action": "check_mismatch_pkts",
+        },
+    ],
+}
+
+queue_buffer_split_inner_ipv6 = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_inner_ipv6_matched_pkts["mac_ipv6_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_inner_ipv6_mismatched_pkts[
+                    "mac_ipv6_l3src_changed_pkt"
+                ],
+                queue_buffer_split_inner_ipv6_mismatched_pkts[
+                    "mac_ipv6_l3dst_changed_pkt"
+                ],
+            ],
+            "check_pkt_data": False,
+            "action": "check_mismatch_pkts",
+        },
+    ],
+}
+
+queue_buffer_split_inner_ipv4_udp = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_inner_ipv4_udp_matched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_pay"
+                ],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_inner_ipv4_udp_mismatched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_l3src_changed_pkt"
+                ],
+                queue_buffer_split_inner_ipv4_udp_mismatched_pkts[
+                    "mac_ipv4_udp_vxlan_mac_ipv4_udp_l4dst_changed_pkt"
+                ],
+            ],
+            "check_pkt_data": False,
+            "action": "check_mismatch_pkts",
+        },
+    ],
+}
+
+queue_buffer_split_inner_ipv6_udp = {
+    "test": [
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_inner_ipv6_udp_matched_pkts["mac_ipv6_udp_pay"],
+            ],
+            "check_pkt_data": True,
+            "action": "check_seg_len",
+        },
+        {
+            "port_id": 0,
+            "send_packet": [
+                queue_buffer_split_inner_ipv6_udp_mismatched_pkts[
+                    "mac_ipv6_udp_l3src_changed_pkt"
+                ],
+                queue_buffer_split_inner_ipv6_udp_mismatched_pkts[
+                    "mac_ipv6_udp_l4dst_changed_pkt"
+                ],
+            ],
+            "check_pkt_data": False,
+            "action": "check_mismatch_pkts",
+        },
+    ],
+}
+
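+# Derive the TCP/SCTP case tables from the UDP ones: serialize the dict with
+# str(), substitute the protocol names in keys and scapy strings, then eval()
+# the result back into a dict (effectively a deep copy with substitutions).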
+queue_buffer_split_inner_ipv4_tcp = [
+    eval(
+        str(queue_buffer_split_inner_ipv4_udp)
+        .replace("ipv4_udp", "ipv4_tcp")
+        .replace("UDP(dport", "TCP(dport")
+    )
+]
+
+queue_buffer_split_inner_ipv6_tcp = [
+    eval(
+        str(queue_buffer_split_inner_ipv6_udp)
+        .replace("ipv6_udp", "ipv6_tcp")
+        .replace("UDP(dport", "TCP(dport")
+    )
+]
+
+queue_buffer_split_inner_ipv4_sctp = [
+    eval(
+        str(queue_buffer_split_inner_ipv4_udp)
+        .replace("ipv4_udp", "ipv4_sctp")
+        .replace("UDP(dport", "SCTP(dport")
+    )
+]
+
+queue_buffer_split_inner_ipv6_sctp = [
+    eval(
+        str(queue_buffer_split_inner_ipv6_udp)
+        .replace("ipv6_udp", "ipv6_sctp")
+        .replace("UDP(dport", "SCTP(dport")
+    )
+]
+
+
+class TestBufferSplit(TestCase):
+    def set_up_all(self):
+        """
+        Run once at the start of the test suite.
+        Generic filter prerequisites.
+        """
+        self.verify(
+            self.nic in ["ICE_25G-E810C_SFP", "ICE_100G-E810C_QSFP"],
+            "%s nic not support timestamp" % self.nic,
+        )
+        self.dut_ports = self.dut.get_ports(self.nic)
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.dut.build_install_dpdk(
+            self.target, extra_options="-Dc_args='-DRTE_ETHDEV_DEBUG_RX=1'"
+        )
+        # Verify that enough ports are available
+        self.verify(len(self.dut_ports) >= 2, "Insufficient ports")
+        self.tester_port0 = self.tester.get_local_port(self.dut_ports[0])
+        self.tester_port1 = self.tester.get_local_port(self.dut_ports[1])
+        self.tester_ifaces = [
+            self.tester.get_interface(self.dut.ports_map[port])
+            for port in self.dut_ports
+        ]
+        self.pf_pci0 = self.dut.ports_info[self.dut_ports[0]]["pci"]
+        self.pf_pci1 = self.dut.ports_info[self.dut_ports[1]]["pci"]
+
+        self.rxq = 8
+        self.pkt = Packet()
+        self.pmdout = PmdOutput(self.dut)
+        self.fdirprocess = FdirProcessing(
+            self, self.pmdout, self.tester_ifaces[0], self.rxq
+        )
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        pass
+
+    def launch_testpmd(self, allowlist, line_option=""):
+        """
+        start testpmd
+        """
+        # Prepare testpmd EAL and parameters
+        self.pmdout.start_testpmd(
+            socket=self.ports_socket,
+            eal_param=allowlist + " --force-max-simd-bitwidth=64 ",
+            param=" --mbuf-size=2048,2048 " + line_option,
+        )
+        # test link status
+        res = self.pmdout.wait_link_status_up("all", timeout=15)
+        self.verify(res is True, "some port link is down")
+
+    def launch_two_ports_testpmd_and_config_port_buffer_split(self):
+        allowlist = f"-a {self.pf_pci0} -a {self.pf_pci1}"
+        line_option = ""
+        self.launch_testpmd(allowlist, line_option)
+        self.dut.send_expect("port stop all", "testpmd> ")
+        self.dut.send_expect("port config 0 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+
+    def launch_one_port_testpmd_with_multi_queues(self):
+        allowlist = f"-a {self.pf_pci0}"
+        line_option = "--txq=8 --rxq=8"
+        self.launch_testpmd(allowlist, line_option)
+        self.dut.send_expect("port stop all", "testpmd> ")
+
+    def start_testpmd(self):
+        self.dut.send_expect("show config rxhdrs", "testpmd> ")
+        self.dut.send_expect(
+            "port config 0 udp_tunnel_port add vxlan 4789", "testpmd> "
+        )
+        self.dut.send_expect("port start all", "testpmd> ")
+        self.dut.send_expect("start", "testpmd> ")
+
+    def check_pkt_data_same(self, pkt_data, expect_pkt_data):
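+        """Verify that each received packet's hex data matches the expected scapy hexdump."""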
+        self.error_msgs = []
+        for i in range(len(expect_pkt_data)):
+            if pkt_data[i] != expect_pkt_data[i]:
+                error_msg = "The packet data should match the expected packet data"
+                self.logger.error(error_msg)
+                self.error_msgs.append(error_msg)
+                self.verify(not self.error_msgs, "Test failed")
+
+    def check_seg_len(self, seg_len, expected_seg_len):
+        if len(seg_len) == 0:
+            error_msg = "There is no segment"
+            self.logger.error(error_msg)
+            self.error_msgs.append(error_msg)
+        else:
+            for i in range(len(expected_seg_len)):
+                if seg_len[i] != expected_seg_len[i]:
+                    error_msg = (
+                        "The segment length should match the expected "
+                        "segment length {}".format(expected_seg_len)
+                    )
+                    self.logger.error(error_msg)
+                    self.error_msgs.append(error_msg)
+
+    def check_no_seg_len(self, seg_len):
+        for i in range(len(seg_len)):
+            if seg_len[i]:
+                error_msg = "The segment length should be empty"
+                self.logger.error(error_msg)
+                self.error_msgs.append(error_msg)
+
+    def check_no_segment(self, seg_len, expected_no_segment):
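+        """Verify the packet is not split: the reported segment lengths match
+        the expected unsplit layout (header length 0, whole packet in one buffer)."""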
+        if len(seg_len) == 0:
+            error_msg = "The segment length should not be empty"
+            self.logger.error(error_msg)
+            self.error_msgs.append(error_msg)
+        else:
+            for i in range(len(expected_no_segment)):
+                if seg_len[i] != expected_no_segment[i]:
+                    error_msg = (
+                        "The unsplit packet length should match the expected "
+                        "length {}".format(expected_no_segment)
+                    )
+                    )
+                    self.logger.error(error_msg)
+                    self.error_msgs.append(error_msg)
+
+    def check_mismatch_pkts(
+        self, queue_id, queue_id1, queue_id2, seg_len, expected_seg_len
+    ):
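+        """For mismatched packets, RSS may still land them on a buffer split
+        queue; check segment lengths there, and expect no split elsewhere."""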
+        for i in range(len(queue_id)):
+            if queue_id[i] == queue_id1 or queue_id[i] == queue_id2:
+                self.logger.info(
+                    "Mismatched pkt is distributed to a buffer split queue by RSS, action: check_seg_len"
+                )
+                self.check_seg_len(seg_len[i], expected_seg_len[0])
+            else:
+                self.logger.info(
+                    "Mismatched pkt is distributed to a non-buffer-split queue by RSS, action: check_no_seg_len"
+                )
+                self.check_no_seg_len(seg_len[i])
+
+    def get_pkt_data(self, pkts):
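+        """Build the packets in a scapy shell on the tester and collect each
+        packet's hexdump() bytes as the expected packet data."""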
+        pkt_data_list = []
+        self.logger.info("{}".format(pkts))
+        self.tester.send_expect("scapy", ">>> ")
+        time.sleep(1)
+        for i in range(len(pkts)):
+            self.tester.send_expect("p = %s" % pkts[i], ">>>")
+            out = self.tester.send_expect("hexdump(p)", ">>>")
+            time.sleep(1)
+            pkt_pat = "(?<=00\S\d )(.*)(?=  )"
+            pkt_data = re.findall(pkt_pat, out, re.M)
+            pkt_data = (" ".join(map(str, pkt_data))).replace("  ", " ")
+            pkt_data = pkt_data.strip()
+            self.logger.info("pkt_data: {}".format(pkt_data))
+            if len(pkt_data) != 0:
+                pkt_data_list.append(pkt_data)
+        self.logger.info("pkt_data_list: {}".format(pkt_data_list))
+        return pkt_data_list
+
+    def send_pkt_get_output(self, pkts, port_id, count=1):
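+        """Send each packet from the tester and parse testpmd verbose output
+        into packet data, per-packet segment lengths, and receive queue ids."""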
+        pkt_data_list = []
+        segment_len_list = []
+        queue_id_list = []
+        self.logger.info("----------send packet-------------")
+        self.logger.info("{}".format(pkts))
+        tx_port = self.tester_ifaces[port_id]
+        for i in range(len(pkts)):
+            self.pkt.update_pkt(pkts[i])
+            time.sleep(2)
+            self.pkt.send_pkt(crb=self.tester, tx_port=tx_port, count=count)
+            out = self.pmdout.get_output(timeout=2)
+            pkt_pat = "(?<=: )(.*)(?= \| )"
+            pkt_data = re.findall(pkt_pat, out, re.M)
+            pkt_data = (" ".join(map(str, pkt_data))).replace("  ", "")
+            self.logger.info("pkt_data: {}".format(pkt_data))
+            if len(pkt_data) != 0:
+                pkt_data_list.append(pkt_data)
+            segment_pat = r".*segment\s+at\s.*len=(\d+)"
+            segment_infos = re.findall(segment_pat, out, re.M)
+            segment_len = list(map(int, segment_infos))
+            self.logger.info("segment_len: {}".format(segment_len))
+            segment_len_list.append(segment_len)
+            queue_pat = r".*queue_id=(\d+)"
+            queue_id = re.findall(queue_pat, out, re.M)
+            queue_id = list(map(int, queue_id))
+            if queue_id:
+                queue_id_list.append(queue_id)
+        self.logger.info("pkt_data_list: {}".format(pkt_data_list))
+        self.logger.info("segment_len_list: {}".format(segment_len_list))
+        return pkt_data_list, segment_len_list, queue_id_list
+
+    def handle_buffer_split_case(
+        self,
+        case_info,
+        expected_seg_len,
+        expected_no_segment,
+        queue_id1,
+        queue_id2,
+    ):
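+        """Run the sub-tests of one buffer split case: send the listed packets,
+        optionally compare packet data, then apply the per-test check action."""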
+        self.error_msgs = []
+        seg_len = []
+        # handle tests
+        tests = case_info["test"]
+        for test in tests:
+            if "send_packet" in test:
+                pkt_data, seg_len, queue_id = self.send_pkt_get_output(
+                    test["send_packet"], port_id=test["port_id"]
+                )
+                if test["check_pkt_data"] == True:
+                    self.logger.info("action: check_pkt_data_same")
+                    expect_pkt_data = self.get_pkt_data(test["send_packet"])
+                    self.check_pkt_data_same(pkt_data, expect_pkt_data)
+            if "action" in test:
+                self.logger.info("action: {}\n".format(test["action"]))
+                if test["action"] == "check_seg_len":
+                    self.check_seg_len(seg_len, expected_seg_len)
+                elif test["action"] == "check_no_segment":
+                    self.check_no_segment(seg_len, expected_no_segment)
+                elif test["action"] == "check_no_seg_len":
+                    self.check_no_seg_len(seg_len)
+                else:
+                    self.check_mismatch_pkts(
+                        queue_id, queue_id1, queue_id2, seg_len, expected_seg_len
+                    )
+            self.verify(not self.error_msgs, "Test failed")
+
+    def verify_port_buffer_split_outer_mac(self):
+        self.launch_two_ports_testpmd_and_config_port_buffer_split()
+        self.dut.send_expect("set rxhdrs eth", "testpmd> ")
+        self.start_testpmd()
+
+        expected_seg_len = [
+            [MAC_HEADER_LEN, IPV4_HEADER_LEN + PAY_LEN],
+            [MAC_HEADER_LEN, IPV4_HEADER_LEN + IPV6_HEADER_LEN + PAY_LEN],
+            [
+                MAC_HEADER_LEN,
+                IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN,
+                IPV6_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN,
+                IPV4_HEADER_LEN + GRE_HEADER_LEN + MAC_HEADER_LEN + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN,
+                IPV4_HEADER_LEN + GRE_HEADER_LEN + IPV6_HEADER_LEN + PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+        queue_id1 = queue_id2 = []
+
+        self.handle_buffer_split_case(
+            port_buffer_split_mac,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+
+    def verify_port_buffer_split_inner_mac(self):
+        self.launch_two_ports_testpmd_and_config_port_buffer_split()
+        self.dut.send_expect("set rxhdrs inner-eth", "testpmd> ")
+        self.start_testpmd()
+
+        expected_seg_len = [
+            [MAC_HEADER_LEN, IPV4_HEADER_LEN + PAY_LEN],
+            [MAC_HEADER_LEN, IPV4_HEADER_LEN + IPV6_HEADER_LEN + PAY_LEN],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN,
+                IPV6_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + GRE_HEADER_LEN + MAC_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN,
+                IPV4_HEADER_LEN + GRE_HEADER_LEN + IPV6_HEADER_LEN + PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+        queue_id1 = queue_id2 = []
+
+        self.handle_buffer_split_case(
+            port_buffer_split_mac,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+
+    def verify_port_buffer_split_inner_l3(self, ptype):
+        self.launch_two_ports_testpmd_and_config_port_buffer_split()
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        expected_seg_len = [
+            [MAC_HEADER_LEN + IPV4_HEADER_LEN, PAY_LEN],
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN + IPV4_HEADER_LEN, PAY_LEN],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV6_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + IPV4_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV6_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN + IPV6_HEADER_LEN + GRE_HEADER_LEN + IPV4_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+        queue_id1 = queue_id2 = []
+
+        self.handle_buffer_split_case(
+            port_buffer_split_inner_l3,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.dut.send_expect("quit", "# ")
+
+    def verify_port_buffer_split_inner_l4(self, ptype):
+        self.launch_two_ports_testpmd_and_config_port_buffer_split()
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        expected_seg_len = [
+            [MAC_HEADER_LEN + IPV4_HEADER_LEN + UDP_HEADER_LEN, PAY_LEN],
+            [
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + IPV6_HEADER_LEN + UDP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + UDP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + UDP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN + TCP_HEADER_LEN, PAY_LEN],
+            [
+                MAC_HEADER_LEN + IPV6_HEADER_LEN + IPV4_HEADER_LEN + TCP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + TCP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + TCP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + TCP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + GRE_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + TCP_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_no_segment = [
+            [0, MAC_HEADER_LEN + IPV4_HEADER_LEN + PAY_LEN],
+            [
+                0,
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + SCTP_HEADER_LEN
+                + PAY_LEN,
+            ],
+        ]
+        queue_id1 = queue_id2 = []
+
+        self.handle_buffer_split_case(
+            port_buffer_split_inner_l4,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.dut.send_expect("quit", "# ")
+
+    def verify_port_buffer_split_inner_sctp(self, ptype):
+        self.launch_two_ports_testpmd_and_config_port_buffer_split()
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        expected_seg_len = [
+            [MAC_HEADER_LEN + IPV4_HEADER_LEN + SCTP_HEADER_LEN, PAY_LEN],
+            [
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + IPV6_HEADER_LEN + SCTP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + SCTP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + SCTP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + SCTP_HEADER_LEN,
+                PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + IPV6_HEADER_LEN
+                + SCTP_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_no_segment = [
+            [0, MAC_HEADER_LEN + IPV4_HEADER_LEN + PAY_LEN],
+            [
+                0,
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + PAY_LEN,
+            ],
+            [
+                0,
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + GRE_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + TCP_HEADER_LEN
+                + PAY_LEN,
+            ],
+        ]
+        queue_id1 = queue_id2 = []
+
+        self.handle_buffer_split_case(
+            port_buffer_split_inner_sctp,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.dut.send_expect("quit", "# ")
+
+    def verify_port_buffer_split_tunnel(self):
+        self.launch_two_ports_testpmd_and_config_port_buffer_split()
+        self.dut.send_expect("set rxhdrs grenat", "testpmd> ")
+        self.start_testpmd()
+
+        expected_seg_len = [
+            [MAC_HEADER_LEN + IPV4_HEADER_LEN, IPV4_HEADER_LEN + PAY_LEN],
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN, IPV6_HEADER_LEN + PAY_LEN],
+            [
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + UDP_HEADER_LEN + VXLAN_HEADER_LEN,
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + UDP_HEADER_LEN + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN + IPV6_HEADER_LEN + UDP_HEADER_LEN + VXLAN_HEADER_LEN,
+                IPV6_HEADER_LEN + TCP_HEADER_LEN + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + GRE_HEADER_LEN,
+                MAC_HEADER_LEN + IPV6_HEADER_LEN + SCTP_HEADER_LEN + PAY_LEN,
+            ],
+            [
+                MAC_HEADER_LEN + IPV6_HEADER_LEN + GRE_HEADER_LEN,
+                IPV4_HEADER_LEN + UDP_HEADER_LEN + PAY_LEN,
+            ],
+        ]
+        expected_no_segment = [
+            [0, MAC_HEADER_LEN + IPV4_HEADER_LEN + UDP_HEADER_LEN + PAY_LEN],
+        ]
+        queue_id1 = queue_id2 = []
+
+        self.handle_buffer_split_case(
+            port_buffer_split_tunnel,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+
+    def verify_queue_buffer_split_outer_mac(self):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 1 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs eth", "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 1 / mark / end",
+        ]
+
+        queue_id1 = queue_id2 = [1]
+        expected_seg_len = [
+            [
+                MAC_HEADER_LEN,
+                IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+
+        rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+        self.handle_buffer_split_case(
+            queue_buffer_split_mac,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+
+    def verify_queue_buffer_split_inner_mac(self):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 2 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("port 0 rxq 3 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs inner-eth", "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions rss queues 2 3 end / mark / end",
+        ]
+
+        queue_id1 = [2]
+        queue_id2 = [3]
+        expected_seg_len = [
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN,
+                IPV4_HEADER_LEN + PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+
+        rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+        self.handle_buffer_split_case(
+            queue_buffer_split_mac,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+
+    def verify_queue_buffer_split_inner_ipv4(self, ptype):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 2 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 2 / mark / end",
+        ]
+
+        queue_id1 = queue_id2 = [2]
+        expected_seg_len = [
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+
+        rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+        self.handle_buffer_split_case(
+            queue_buffer_split_mac,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+        self.dut.send_expect("quit", "# ")
+
+    def verify_queue_buffer_split_inner_ipv6(self, ptype):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 4 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("port 0 rxq 5 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / end actions rss queues 4 5 end / mark / end",
+        ]
+
+        queue_id1 = [4]
+        queue_id2 = [5]
+        expected_seg_len = [
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN, PAY_LEN],
+        ]
+        expected_no_segment = []
+
+        rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+        self.handle_buffer_split_case(
+            queue_buffer_split_inner_ipv6,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+        self.dut.send_expect("quit", "# ")
+
+    def verify_queue_buffer_split_inner_udp(self, ptype):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 3 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp dst is 23 / end actions queue index 3 / mark / end",
+            "flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / udp dst is 23 / end actions queue index 3 / mark / end",
+        ]
+
+        queue_id1 = queue_id2 = [3]
+        expected_ipv4_pkts_seg_len = [
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_ipv6_pkts_seg_len = [
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN + UDP_HEADER_LEN, PAY_LEN],
+        ]
+        expected_no_segment = []
+
+        if ptype == "ipv4-udp" or ptype == "inner-ipv4-udp":
+            rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+            self.handle_buffer_split_case(
+                queue_buffer_split_inner_ipv4_udp,
+                expected_ipv4_pkts_seg_len,
+                expected_no_segment,
+                queue_id1,
+                queue_id2,
+            )
+        else:
+            rule_li = self.fdirprocess.create_rule(fdir_rule[1])
+            self.handle_buffer_split_case(
+                queue_buffer_split_inner_ipv6_udp,
+                expected_ipv6_pkts_seg_len,
+                expected_no_segment,
+                queue_id1,
+                queue_id2,
+            )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+        self.dut.send_expect("quit", "# ")
+
+    def verify_queue_buffer_split_inner_tcp(self, ptype):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 2 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("port 0 rxq 3 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / tcp dst is 23 / end actions rss queues 2 3 end / mark / end",
+            "flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / tcp dst is 23 / end actions rss queues 2 3 end / mark / end",
+        ]
+
+        queue_id1 = [2]
+        queue_id2 = [3]
+        expected_ipv4_pkts_seg_len = [
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + TCP_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_ipv6_pkts_seg_len = [
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN + TCP_HEADER_LEN, PAY_LEN],
+        ]
+        expected_no_segment = []
+
+        if ptype == "ipv4-tcp" or ptype == "inner-ipv4-tcp":
+            rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+            self.handle_buffer_split_case(
+                queue_buffer_split_inner_ipv4_tcp[0],
+                expected_ipv4_pkts_seg_len,
+                expected_no_segment,
+                queue_id1,
+                queue_id2,
+            )
+        else:
+            rule_li = self.fdirprocess.create_rule(fdir_rule[1])
+            self.handle_buffer_split_case(
+                queue_buffer_split_inner_ipv6_tcp[0],
+                expected_ipv6_pkts_seg_len,
+                expected_no_segment,
+                queue_id1,
+                queue_id2,
+            )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+        self.dut.send_expect("quit", "# ")
+
+    def verify_queue_buffer_split_inner_sctp(self, ptype):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 5 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs %s" % ptype, "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / sctp dst is 23 / end actions queue index 5 / mark / end",
+            "flow create 0 ingress pattern eth / ipv6 src is 2001::1 dst is 2001::2 / sctp dst is 23 / end actions queue index 5 / mark / end",
+        ]
+
+        queue_id1 = queue_id2 = [5]
+        expected_ipv4_pkts_seg_len = [
+            [
+                MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + UDP_HEADER_LEN
+                + VXLAN_HEADER_LEN
+                + MAC_HEADER_LEN
+                + IPV4_HEADER_LEN
+                + SCTP_HEADER_LEN,
+                PAY_LEN,
+            ],
+        ]
+        expected_ipv6_pkts_seg_len = [
+            [MAC_HEADER_LEN + IPV6_HEADER_LEN + SCTP_HEADER_LEN, PAY_LEN],
+        ]
+        expected_no_segment = []
+
+        if ptype == "ipv4-sctp" or ptype == "inner-ipv4-sctp":
+            rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+            self.handle_buffer_split_case(
+                queue_buffer_split_inner_ipv4_sctp[0],
+                expected_ipv4_pkts_seg_len,
+                expected_no_segment,
+                queue_id1,
+                queue_id2,
+            )
+        else:
+            rule_li = self.fdirprocess.create_rule(fdir_rule[1])
+            self.handle_buffer_split_case(
+                queue_buffer_split_inner_ipv6_sctp[0],
+                expected_ipv6_pkts_seg_len,
+                expected_no_segment,
+                queue_id1,
+                queue_id2,
+            )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
+        self.dut.send_expect("quit", "# ")
+
+    def verify_queue_buffer_split_tunnel(self):
+        self.launch_one_port_testpmd_with_multi_queues()
+        self.dut.send_expect("port 0 rxq 4 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("port 0 rxq 5 rx_offload buffer_split on", "testpmd> ")
+        self.dut.send_expect("show port 0 rx_offload configuration", "testpmd> ")
+        self.dut.send_expect("set rxhdrs grenat", "testpmd> ")
+        self.start_testpmd()
+
+        fdir_rule = [
+            "flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp dst is 23 / end actions rss queues 4 5 end / mark / end",
+        ]
+
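+        # the rule distributes matched packets across queues 4 and 5, both configured for buffer split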
+        queue_id1 = [4]
+        queue_id2 = [5]
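+        # with "set rxhdrs grenat" the split point is right after the tunnel header: outer headers in segment 1, the whole inner frame in segment 2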
+        expected_seg_len = [
+            [
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + UDP_HEADER_LEN + VXLAN_HEADER_LEN,
+                MAC_HEADER_LEN + IPV4_HEADER_LEN + UDP_HEADER_LEN + PAY_LEN,
+            ],
+        ]
+        expected_no_segment = []
+
+        rule_li = self.fdirprocess.create_rule(fdir_rule[0])
+        self.handle_buffer_split_case(
+            queue_buffer_split_inner_ipv4_udp,
+            expected_seg_len,
+            expected_no_segment,
+            queue_id1,
+            queue_id2,
+        )
+        self.fdirprocess.destroy_rule(port_id=0, rule_id=rule_li)
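+        # no explicit "quit" here; tear_down() stops testpmd after each case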
+
+    def test_port_buffer_split_outer_mac(self):
+        self.verify_port_buffer_split_outer_mac()
+
+    def test_port_buffer_split_inner_mac(self):
+        self.verify_port_buffer_split_inner_mac()
+
+    def test_port_buffer_split_inner_l3(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4 ================"
+        )
+        self.verify_port_buffer_split_inner_l3(ptype="ipv4")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split ipv6 ================"
+        )
+        self.verify_port_buffer_split_inner_l3(ptype="ipv6")
+
+        self.logger.info(
+            "===================Test subcase 3: buffer split inner-ipv4 ================"
+        )
+        self.verify_port_buffer_split_inner_l3(ptype="inner-ipv4")
+
+        self.logger.info(
+            "===================Test subcase 4: buffer split inner-ipv6 ================"
+        )
+        self.verify_port_buffer_split_inner_l3(ptype="inner-ipv6")
+
+    def test_port_buffer_split_inner_l4(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4-udp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="ipv4-udp")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split ipv6-udp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="ipv6-udp")
+
+        self.logger.info(
+            "===================Test subcase 3: buffer split ipv4-tcp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="ipv4-tcp")
+
+        self.logger.info(
+            "===================Test subcase 4: buffer split ipv6-tcp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="ipv6-tcp")
+
+        self.logger.info(
+            "===================Test subcase 5: buffer split inner-ipv4-udp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="inner-ipv4-udp")
+
+        self.logger.info(
+            "===================Test subcase 6: buffer split inner-ipv6-udp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="inner-ipv6-udp")
+
+        self.logger.info(
+            "===================Test subcase 7: buffer split inner-ipv4-tcp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="inner-ipv4-tcp")
+
+        self.logger.info(
+            "===================Test subcase 8: buffer split inner-ipv6-tcp ================"
+        )
+        self.verify_port_buffer_split_inner_l4(ptype="inner-ipv6-tcp")
+
+    def test_port_buffer_split_inner_sctp(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4-sctp ================"
+        )
+        self.verify_port_buffer_split_inner_sctp(ptype="ipv4-sctp")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split ipv6-sctp ================"
+        )
+        self.verify_port_buffer_split_inner_sctp(ptype="ipv6-sctp")
+
+        self.logger.info(
+            "===================Test subcase 3: buffer split inner-ipv4-sctp ================"
+        )
+        self.verify_port_buffer_split_inner_sctp(ptype="inner-ipv4-sctp")
+
+        self.logger.info(
+            "===================Test subcase 4: buffer split inner-ipv6-sctp ================"
+        )
+        self.verify_port_buffer_split_inner_sctp(ptype="inner-ipv6-sctp")
+
+    def test_port_buffer_split_tunnel(self):
+        self.verify_port_buffer_split_tunnel()
+
+    def test_queue_buffer_split_outer_mac(self):
+        self.verify_queue_buffer_split_outer_mac()
+
+    def test_queue_buffer_split_inner_mac(self):
+        self.verify_queue_buffer_split_inner_mac()
+
+    def test_queue_buffer_split_inner_ipv4(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4 ================"
+        )
+        self.verify_queue_buffer_split_inner_ipv4(ptype="ipv4")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split inner-ipv4 ================"
+        )
+        self.verify_queue_buffer_split_inner_ipv4(ptype="inner-ipv4")
+
+    def test_queue_buffer_split_inner_ipv6(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv6 ================"
+        )
+        self.verify_queue_buffer_split_inner_ipv6(ptype="ipv6")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split inner-ipv6 ================"
+        )
+        self.verify_queue_buffer_split_inner_ipv6(ptype="inner-ipv6")
+
+    def test_queue_buffer_split_inner_udp(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4-udp ================"
+        )
+        self.verify_queue_buffer_split_inner_udp(ptype="ipv4-udp")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split ipv6-udp ================"
+        )
+        self.verify_queue_buffer_split_inner_udp(ptype="ipv6-udp")
+
+        self.logger.info(
+            "===================Test subcase 3: buffer split inner-ipv4-udp ================"
+        )
+        self.verify_queue_buffer_split_inner_udp(ptype="inner-ipv4-udp")
+
+        self.logger.info(
+            "===================Test subcase 4: buffer split inner-ipv6-udp ================"
+        )
+        self.verify_queue_buffer_split_inner_udp(ptype="inner-ipv6-udp")
+
+    def test_queue_buffer_split_inner_tcp(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4-tcp ================"
+        )
+        self.verify_queue_buffer_split_inner_tcp(ptype="ipv4-tcp")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split ipv6-tcp ================"
+        )
+        self.verify_queue_buffer_split_inner_tcp(ptype="ipv6-tcp")
+
+        self.logger.info(
+            "===================Test subcase 3: buffer split inner-ipv4-tcp ================"
+        )
+        self.verify_queue_buffer_split_inner_tcp(ptype="inner-ipv4-tcp")
+
+        self.logger.info(
+            "===================Test subcase 4: buffer split inner-ipv6-tcp ================"
+        )
+        self.verify_queue_buffer_split_inner_tcp(ptype="inner-ipv6-tcp")
+
+    def test_queue_buffer_split_inner_sctp(self):
+        self.logger.info(
+            "===================Test subcase 1: buffer split ipv4-sctp ================"
+        )
+        self.verify_queue_buffer_split_inner_sctp(ptype="ipv4-sctp")
+
+        self.logger.info(
+            "===================Test subcase 2: buffer split ipv6-sctp ================"
+        )
+        self.verify_queue_buffer_split_inner_sctp(ptype="ipv6-sctp")
+
+        self.logger.info(
+            "===================Test subcase 3: buffer split inner-ipv4-sctp ================"
+        )
+        self.verify_queue_buffer_split_inner_sctp(ptype="inner-ipv4-sctp")
+
+        self.logger.info(
+            "===================Test subcase 4: buffer split inner-ipv6-sctp ================"
+        )
+        self.verify_queue_buffer_split_inner_sctp(ptype="inner-ipv6-sctp")
+
+    def test_queue_buffer_split_tunnel(self):
+        self.verify_queue_buffer_split_tunnel()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.pmdout.quit()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.build_install_dpdk(self.target)
+        self.dut.kill_all()
-- 
2.25.1


* RE: [dts][PATCH V5 3/3] tests/ice_buffer_split: ice PF enable buffer split
  2022-12-20  6:09 ` [dts][PATCH V5 3/3] tests/ice_buffer_split: " Yaqi Tang
@ 2022-12-20  9:33   ` Jiang, YuX
  2022-12-21  6:38   ` Peng, Yuan
  2022-12-26  5:09   ` lijuan.tu
  2 siblings, 0 replies; 7+ messages in thread
From: Jiang, YuX @ 2022-12-20  9:33 UTC (permalink / raw)
  To: Tang, Yaqi, dts; +Cc: Tang, Yaqi

> -----Original Message-----
> From: Yaqi Tang <yaqi.tang@intel.com>
> Sent: Tuesday, December 20, 2022 2:10 PM
> To: dts@dpdk.org
> Cc: Tang, Yaqi <yaqi.tang@intel.com>
> Subject: [dts][PATCH V5 3/3] tests/ice_buffer_split: ice PF enable buffer split
> 
> Packets received in the ice scalar path can be divided into two mempools with
> expected hdr and payload length/content specified by protocol type.
> 
> Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
> ---
Tested-by: Yu Jiang <yux.jiang@intel.com>

Best regards,
Yu Jiang

* RE: [dts][PATCH V5 3/3] tests/ice_buffer_split: ice PF enable buffer split
  2022-12-20  6:09 ` [dts][PATCH V5 3/3] tests/ice_buffer_split: " Yaqi Tang
  2022-12-20  9:33   ` Jiang, YuX
@ 2022-12-21  6:38   ` Peng, Yuan
  2022-12-26  5:09   ` lijuan.tu
  2 siblings, 0 replies; 7+ messages in thread
From: Peng, Yuan @ 2022-12-21  6:38 UTC (permalink / raw)
  To: Tang, Yaqi, dts; +Cc: Tang, Yaqi



> -----Original Message-----
> From: Yaqi Tang <yaqi.tang@intel.com>
> Sent: Tuesday, December 20, 2022 2:10 PM
> To: dts@dpdk.org
> Cc: Tang, Yaqi <yaqi.tang@intel.com>
> Subject: [dts][PATCH V5 3/3] tests/ice_buffer_split: ice PF enable buffer split
> 
> Packets received in the ice scalar path can be divided into two mempools with
> expected hdr and payload length/content specified by protocol type.
> 
> Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
> ---

Acked-by: Yuan Peng <yuan.peng@intel.com>

* [dts][PATCH V5 3/3] tests/ice_buffer_split: ice PF enable buffer split
  2022-12-20  6:09 ` [dts][PATCH V5 3/3] tests/ice_buffer_split: " Yaqi Tang
  2022-12-20  9:33   ` Jiang, YuX
  2022-12-21  6:38   ` Peng, Yuan
@ 2022-12-26  5:09   ` lijuan.tu
  2 siblings, 0 replies; 7+ messages in thread
From: lijuan.tu @ 2022-12-26  5:09 UTC (permalink / raw)
  To: dts, Yaqi Tang; +Cc: Yaqi Tang

On Tue, 20 Dec 2022 06:09:53 +0000, Yaqi Tang <yaqi.tang@intel.com> wrote:
> Packets received in the ice scalar path can be divided into two mempools with expected hdr and payload length/content specified by protocol type.
> 
> Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>

Acked-by: Lijuan Tu <lijuan.tu@intel.com>
Series applied, thanks

end of thread, other threads:[~2022-12-26  5:09 UTC | newest]

Thread overview: 7+ messages
2022-12-20  6:09 [PATCH V5 0/3] ice PF enable buffer split Yaqi Tang
2022-12-20  6:09 ` [dts][PATCH V5 1/3] test_plans/index: add new test plan for ice_buffer_split Yaqi Tang
2022-12-20  6:09 ` [dts][PATCH V5 2/3] test_plans/ice_buffer_split: ice PF enable buffer split Yaqi Tang
2022-12-20  6:09 ` [dts][PATCH V5 3/3] tests/ice_buffer_split: " Yaqi Tang
2022-12-20  9:33   ` Jiang, YuX
2022-12-21  6:38   ` Peng, Yuan
2022-12-26  5:09   ` lijuan.tu
